From austin.sun at intel.com Mon Feb 1 05:30:33 2021
From: austin.sun at intel.com (Sun, Austin)
Date: Mon, 1 Feb 2021 05:30:33 +0000
Subject: [Starlingx-discuss] StarlingX Distro-OpenStack: Bi-weekly Project Meeting -- Cancel 2nd Feb and 16th Feb
Message-ID:

Hi All:

Just to let you know, there will be no Distro-OpenStack project meetings on 2nd Feb and 16th Feb, as there are not many topics and it is Chinese New Year.

Thanks.
BR
Austin Sun.

From austin.sun at intel.com Mon Feb 1 05:26:08 2021
From: austin.sun at intel.com (Sun, Austin)
Date: Mon, 1 Feb 2021 05:26:08 +0000
Subject: [Starlingx-discuss] Canceled: StarlingX Distro-OpenStack: Bi-weekly Project Meeting
Message-ID:

Hi folks,

This is a new series of bi-weekly project meetings on StarlingX Distro-OpenStack. Your participation in this meeting and/or other offline contributions are highly appreciated!

Project Team Etherpad: https://etherpad.openstack.org/p/stx-distro-openstack-meetings

The winter time slot for this meeting:
CST: 10:00 PM (China, Shanghai)
PST: 7:00 AM (US West, US, Oregon)
EST: 9:00 AM (East Canada, Canada, Ottawa)

Thanks.
BR
Austin Sun.

From Bill.Zvonar at windriver.com Mon Feb 1 12:33:48 2021
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Mon, 1 Feb 2021 12:33:48 +0000
Subject: [Starlingx-discuss] Cengn Mirror server is down?!
In-Reply-To: <66da5e8d-cc2b-1e13-dd4f-a8c5afa2f97d@windriver.com>
References: <75da487b-d473-1d0f-8504-519a664f4f6c@windriver.com> <34128b7c-d78d-13a5-8682-99b8615487a5@windriver.com> <7428784a-0ee0-5f86-fb4d-bc32213f11a3@windriver.com> <6d5f08e9-2250-4183-8295-2b9f9375557e@windriver.com> <66da5e8d-cc2b-1e13-dd4f-a8c5afa2f97d@windriver.com>
Message-ID:

Thanks again for all the work on this Scott.

From: Scott Little
Sent: Sunday, January 31, 2021 4:26 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Cengn Mirror server is down?!

Update

The CENGN builds will now return to the regular daily schedule.

None of the new builds have been sanitized as yet. Until those builds are sanitized, it might be advisable to keep software updates to a minimum. I'll defer to the TSC on that one. However, from a purely CENGN/mirror/build perspective, I think we are ready to allow updates to .lst files again.
Scott

On 2021-01-31 4:11 p.m., Scott Little wrote:

Update

CENGN hasn't found any specific network issues as of yet. They have acknowledged that the current network configuration, implemented in the wake of the power outage and the switch to the new server, is sub-optimal, and will revisit it soon.

We have worked over the last several days to get the firewall configuration right to allow all the required services on the build server to run. One particular quirk of the current network setup is that the DNS-reported address of the mirror is inaccessible. An /etc/hosts entry is required to force the mirror address to resolve to an alternate IP address. I've spent the last several sleepless nights learning how to push that change not only into our build container, but into all the STX service containers that we build. I can finally report successful builds of the entire layered build stack.

On the mirror update side, I have adjusted the mirroring algorithm to do more validation steps as it downloads. This should prevent further corruptions, but will slow the frequency with which we can refresh our repos from upstream. Instead of updating all repos daily, some will be on a ~1/week schedule.

Scott

On 2021-01-29 1:02 a.m., Scott Little wrote:

Update

Well, a few more networking issues have been solved, thanks to Davlet and Moh at CENGN.

We have had a successful build of the 'compiler' and 'distro' layers. The 'flock' layer built the rpms, but failed to update the installer. It appears to relate to the loopback device. I'll look into that tomorrow.

~350 RPMs failed to restore from upstream. None are critical to any of our current builds. I don't plan to pursue them any further.

The mirroring job is throwing a lot more error messages than I'm used to seeing. Many rpm download attempts are failing, and left 2000 new corrupt rpms in its wake. I am cleaning up the new mess and adjusting the mirroring script to prevent download failures from leaving us with corrupt repos. The download failures seem to come in batches, so I'll reach out to CENGN to see if they had any network issues that might explain the problem.

Scott

On 2021-01-28 12:27 a.m., Scott Little wrote:

Update

All known firewall issues have been fixed. Jenkins is now up and running on the build server once again. Our first manually triggered build attempt of the compiler layer is now running.

Of the original ~13000 corrupted rpms, we now have ...

~9250 have been restored, including all rpms that are known inputs to supported builds.
~3100 damaged rpms from the 2.0.0 release will not be restored. TSC and community meetings agreed that 2.0.0, being 2 releases out of date, is no longer supported.
~650 have yet to be restored from upstream. The jenkins job that pulls from upstream sources continues to run.

I would like to have successful CENGN builds of all layers before you all hammer the new server with your pent-up download_mirror.sh requests. Please wait a little longer. We should be back in business sometime tomorrow.

Scott

On 2021-01-27 9:56 a.m., Scott Little wrote:

Update

Of the original ~13000 corrupted rpms, we now have ...

5700 have been restored.
2400 have yet to be assessed.
1800 have no backup within cengn. Upstream availability has yet to be determined.
3100 I propose to drop, as they fall within the 2.0.0 release which is in very bad shape, whereas 2.0.1 is in fairly good shape and worth saving.

Scott
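For anyone applying the /etc/hosts workaround Scott describes in the Jan 31 update above, the override is a single hosts-file line. A minimal sketch, assuming only the mirror hostname from this thread; the address shown is a documentation placeholder, not the real CENGN IP:

# /etc/hosts -- pin the mirror name to the reachable alternate address
# (203.0.113.10 is an example placeholder; use the address CENGN provides)
203.0.113.10    mirror.starlingx.cengn.ca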
On 2021-01-27 1:37 a.m., Scott Little wrote:

The audit of repos identified ~13000 corrupted rpms.

So far ~5000 have been restored from redundant copies within cengn.

Approx. 7500 have yet to be assessed. Those remaining rpms are inputs or outputs of actual builds. The rpms produced by builds might carry the same file name, but may contain different content due to embedded file dates, if not real changes. Extra care is required to restore from the correct source. If an exact match is unavailable, I may be forced to use a source that is 'near' in time and therefore less likely to contain anything other than trivial file timestamp deltas. I'm still pondering the implications of such a substitution.

That leaves ~500 having no backup within cengn. Upstream availability has yet to be determined.

Scott

On 2021-01-26 1:28 a.m., Scott Little wrote:

The jenkins in the cengn build is offline while we repair the file system on the mirror. The build machine both reads from, and writes to, the mirror, and I don't want it to cause further corruptions while the mirror is being repaired.

The cengn mirror has been cut over to the new storage back end. However, any corrupt files from the old storage back end will remain corrupted on the new one.

Restoration of rpm repos will proceed in several phases:

1) An audit of all rpm repos is underway. RPMs that fail their checksum are being removed, and repodata will be updated to reflect the missing RPMs.

2) The mirror update job (downloading rpms from upstream rpm repos) will be run to restore as many missing files as possible. Many rpms may no longer be available, as we kept old versions of packages that upstream drops when a new version is available.

3) If rpms are not available from upstream but are required for our build, we can attempt to restore them from one of several places, e.g. the inputs directory of various builds, or the downloads directory on the cengn build server (independent storage from the mirror).

4) Resume building new loads.

All of the release ISOs on cengn were corrupt. I've been able to restore 4.0.1 and 2.0.1 from the original build artifacts. Original build artifacts for 3.0.0 are missing. I do have build artifacts for builds that were intended to lead up to a 3.0.1 release. If a verifiable copy of 3.0.0 can't be found, we may want to consider issuing a 3.0.1. Alternatively, I can rebuild 3.0.0, but the file checksums won't match the original build due to embedded timestamps.

Scott

On 2021-01-25 11:08 a.m., Little, Scott wrote:

Work continues at CENGN to repair damaged repos and to move us to a new backend infrastructure with better redundancy.

I still recommend that you avoid using download_mirror.sh until the repos are fully repaired. I don't have a firm ETA for that, perhaps two before we have full confidence in the repairs.

Scott

From: Scott Little
Sent: January 23, 2021 2:24 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Cengn Mirror server is down?!

CENGN is up, but there seems to be significant damage to the repositories. We are investigating options to restore the repositories to health.

I do NOT recommend running download_mirror.sh at this time.

Scott

On 2021-01-22 3:53 p.m., Scott Little wrote:

CENGN remains out of service. A new problem has been identified with ceph storage. They continue to work the outage.

Scott

On 2021-01-22 9:55 a.m., Scott Little wrote:

CENGN remains out of service. AC to the server room has been restored. They are now working some firewall issues. They hope to have it restored in a few hours.

Scott
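The audit step described in the Jan 26 note above (remove RPMs that fail their checksum, then regenerate repodata) can be approximated with standard rpm/createrepo tooling. A hedged sketch of that shape, not the actual CENGN scripts; the repo path is a placeholder:

# flag packages whose embedded digests no longer verify
find /path/to/repo -name '*.rpm' | while read -r f; do
    rpm -K --nosignature "$f" >/dev/null 2>&1 || echo "CORRUPT: $f"
done
# after removing the corrupt files, rebuild the repo metadata
createrepo --update /path/to/repo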
On 2021-01-21 10:07 a.m., Panech, Davlet wrote:

The hosting company had an issue with their servers; they are working on it. They hope to be back online mid afternoon (Jan 21, Eastern Standard Time).

From: Panech, Davlet
Sent: January 21, 2021 9:56 AM
To: Dimofte, Alexandru ; starlingx-discuss at lists.starlingx.io
Subject: Re: Cengn Mirror server is down?!

Thanks Alexandru, we are looking into it.

From: Dimofte, Alexandru
Sent: January 21, 2021 3:24 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Cengn Mirror server is down?!

Hello guys,

I observed, trying to connect to http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ , that the cengn mirror server is down.

Best regards,
Alex

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania

From alexandru.dimofte at intel.com Mon Feb 1 14:33:23 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Mon, 1 Feb 2021 14:33:23 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210129T180654Z
Message-ID:

Sanity Test from 2021-January-29 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210129T180654Z/outputs/iso/ )

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210129T180654Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    49 TCs [PASS]
    Sanity Platform     07 TCs [PASS]
    TOTAL: [ 61 TCs ]

AIO - Duplex
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     07 TCs [PASS]
    TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     08 TCs [PASS]
    TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     09 TCs [PASS]
    TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    49 TCs [PASS]
    Sanity Platform     07 TCs [PASS]
    TOTAL: [ 61 TCs ]

AIO - Duplex
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     07 TCs [PASS]
    TOTAL: [ 64 TCs ]

Standard (2+2)
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     08 TCs [PASS]
    TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     09 TCs [PASS]
    TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania

From Frank.Miller at windriver.com Mon Feb 1 16:45:45 2021
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Mon, 1 Feb 2021 16:45:45 +0000
Subject: [Starlingx-discuss] Plan forward for stx-containerization discussions
Message-ID:

StarlingX team:

The requirements for a regular meeting on containerization of services have been reduced over the past few months. I propose this meeting be dropped and any topics around containerization be covered via email, or, if in-person discussion is needed, covered in one of the flock meetings.

If there is a strong need for a separate dedicated meeting on containerization, please reply and provide the reasoning. If we decide to continue with a separate meeting I will need to find a new day and timeslot.

Frank
PL StarlingX containerization
From OwenYuen at cmail.carleton.ca Mon Feb 1 18:46:26 2021
From: OwenYuen at cmail.carleton.ca (Owen Yuen)
Date: Mon, 1 Feb 2021 18:46:26 +0000
Subject: [Starlingx-discuss] Cannot configure controller-0 on subcloud after bootstrapping from central cloud
In-Reply-To:
References:
Message-ID:

Thanks Bart,

Another question: I want to clarify, in the distributed cloud setup, when running the config_management script it asks for the "management interface address CIDR". Is this address referring to the oam_node_0_address or the oam_floating_address? For instance, in https://docs.starlingx.io/deploy_install_guides/r5_release/virtual/aio_duplex_install_kubernetes.html if I had followed the example for the "minimal config override file" provided, would I use the 10.10.10.2 or the 10.10.10.3?

$ sudo config_management
Enabling interfaces... DONE
Waiting 120 seconds for LLDP neighbor discovery...
Retrieving neighbor details... DONE
Available interfaces:
local interface    remote port
---------------    -----------
enp0s3             08:00:27:c4:6c:7a
enp0s8             08:00:27:86:7a:13
enp0s9             unknown

Enter management interface name: enp0s3
Enter management address CIDR: 10.10.10.?/24
Enter management gateway address [10.10.10.1]:
Enter System Controller subnet: 10.10.10.0/24
Disabling non-management interfaces... DONE
Configuring management interface... DONE
RTNETLINK answers: File exists
Adding route to System Controller... DONE

Thanks
Owen

From: Wensley, Barton
Sent: Friday, January 29, 2021 8:38 AM
To: Owen Yuen; starlingx-discuss at lists.starlingx.io
Subject: RE: Cannot configure controller-0 on subcloud after bootstrapping from central cloud

Owen,

You add the route after unlocking the subcloud active controller.

Bart

From: Owen Yuen
Sent: Thursday, January 28, 2021 5:45 PM
To: Wensley, Barton ; starlingx-discuss at lists.starlingx.io
Subject: RE: Cannot configure controller-0 on subcloud after bootstrapping from central cloud

Hi Barton,

Another question: The last step of the distributed cloud guide says to add a route from the subcloud management network to the central cloud management network. Is this done AFTER I unlock the subcloud active controller or before? I'm approaching the end and want to make sure I do everything correctly.

Thanks
Owen

From: Wensley, Barton
Sent: Thursday, January 28, 2021 8:24 AM
To: Owen Yuen; starlingx-discuss at lists.starlingx.io
Subject: RE: Cannot configure controller-0 on subcloud after bootstrapping from central cloud

Owen - see my comments below...

Bart

From: Owen Yuen
Sent: Wednesday, January 27, 2021 10:04 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Cannot configure controller-0 on subcloud after bootstrapping from central cloud

Hi all,

I'm a student so please bear with me if my questions are rudimentary. I'm trying to set up an AIO duplex distributed cloud with one central cloud and one subcloud. I've managed to bootstrap the subcloud from the central cloud successfully, and the next step says to continue the installation from the AIO duplex guide starting from "Configure controller-0". On step 2, I need to run system commands but all of them require the host to be unlocked. I'm not able to unlock the host either since I'm on the host itself. I assume these steps are done on the subcloud controller-0, correct?

[Bart] Yes - these steps are done on the subcloud controller-0. They are done BEFORE you unlock the subcloud controller - you will see that following these steps will eventually lead to unlocking the subcloud controller.

Any help will be appreciated.

Thanks
Owen
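On the route timing question above: after the subcloud controller is unlocked, the route Bart describes would be added with the host-route CLI. A sketch only, reusing the interface name and example addresses from the transcript above; check the exact argument order with 'system help host-route-add' before relying on it:

# on the subcloud, after unlock: route to the System Controller
# management subnet (10.10.10.0/24) via the management gateway
system host-route-add controller-0 enp0s3 10.10.10.0 24 10.10.10.1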
From Barton.Wensley at windriver.com Mon Feb 1 18:55:22 2021
From: Barton.Wensley at windriver.com (Wensley, Barton)
Date: Mon, 1 Feb 2021 18:55:22 +0000
Subject: [Starlingx-discuss] Cannot configure controller-0 on subcloud after bootstrapping from central cloud
In-Reply-To:
References:
Message-ID:

Use the floating OAM address for the subcloud. It must match the external_oam_floating_address you will specify in the bootstrap-values.yml file you will use for that subcloud.

Bart
From Venkata.Veldanda at radisys.com Tue Feb 2 13:26:09 2021
From: Venkata.Veldanda at radisys.com (Venkata Ramana Veldanda)
Date: Tue, 2 Feb 2021 13:26:09 +0000
Subject: [Starlingx-discuss] PTP4L configuration in StarlingX for custom sections
Message-ID:

Hi,

We are running an STX4.0/Simplex load and would like to understand how we can add custom sections for PTP-related configuration.

We leverage "system service-parameter-add" to modify the parameters of existing sections in /etc/ptp4l.conf (e.g. the global section, which is already there), but how do we add a new section? We tried to use "system service-parameter-add" with the '--resource' field to update this, but it doesn't get reflected in ptp4l.conf (even after running the apply command).

For example, we want to add sections like the [unicast_master_table] and [enp134s0f1] sections below:

[global]
dataset_comparison G.8275.x
G.8275.defaultDS.localPriority 128
#maxStepsRemoved 255
logAnnounceInterval -3
logSyncInterval 0
logMinDelayReqInterval 0
masterOnly 0
slaveOnly 1
G.8275.portDS.localPriority 128
hybrid_e2e 1
inhibit_multicast_service 1
unicast_listen 1
unicast_req_duration 60
domainNumber 44
step_threshold 0.0
first_step_threshold 0.00002
#announceReceiptTimeout 8
tx_timestamp_timeout 50
#
# Customize the following for slave operation:
#
[unicast_master_table]
table_id 1
logQueryInterval 2
UDPv4 192.168.2.100
#
[enp134s0f1]
unicast_master_table 1
network_transport UDPv4

We tried this with no luck (although it gets added to system service-parameter-list):

2021-02-02T18:47:23.000 controller-0 -sh: info HISTORY: PID=438513 UID=42425 system service-parameter-add ptp enp134s0f1 unicast_master_table=1 --resource enp134s0f1

Venkata Veldanda
From Ankush.Rai at commscope.com Tue Feb 2 14:31:54 2021
From: Ankush.Rai at commscope.com (Rai, Ankush)
Date: Tue, 2 Feb 2021 14:31:54 +0000
Subject: [Starlingx-discuss] Central cloud hosting in VM environment
In-Reply-To:
References:
Message-ID:

Hi Bart,

Thanks for responding to my query. Actually I have one more question: can we utilize the AWS bare-metal option for the same?

Regards,
Ankush

From: Wensley, Barton
Sent: Wednesday, January 27, 2021 6:09 PM
To: Rai, Ankush ; starlingx-discuss at lists.starlingx.io
Subject: RE: Central cloud hosting in VM environment

Ankush,

StarlingX does not support hosting a central cloud in the AWS environment.

Bart

From: Rai, Ankush
Sent: Wednesday, January 27, 2021 5:04 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Central cloud hosting in VM environment

Please reply on this.

From: Rai, Ankush
Sent: Thursday, January 21, 2021 11:21 AM
To: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Central cloud hosting in VM environment

We are thinking to host the central cloud in the AWS environment.
Please respond if this is supported by StarlingX.

Thanks,
Ankush

From: Rai, Ankush
Sent: Wednesday, January 20, 2021 5:18 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Central cloud hosting in VM environment

Hi,

Is it possible to host the central cloud in a VM environment? We are looking to go for a Virtual All-in-one Duplex or Virtual Standard with Controller Storage based installation for the central cloud. But the subclouds will be bare metal based.

Regards,
Ankush

From Barton.Wensley at windriver.com Tue Feb 2 14:55:42 2021
From: Barton.Wensley at windriver.com (Wensley, Barton)
Date: Tue, 2 Feb 2021 14:55:42 +0000
Subject: [Starlingx-discuss] Central cloud hosting in VM environment
In-Reply-To:
References:
Message-ID:

Ankush,

I don't know if that would work. Maybe someone on the list has attempted that and can provide more information?

Bart

From Saul.Wold at windriver.com Tue Feb 2 15:03:17 2021
From: Saul.Wold at windriver.com (Wold, Saul)
Date: Tue, 2 Feb 2021 15:03:17 +0000
Subject: [Starlingx-discuss] Canceled: Weekly StarlingX non-OpenStack distro meeting
Message-ID:

I need to cancel the meeting due to a conflict; I will also miss the TSC/Community meeting.

I continue to work on the CentOS alternative specification and will post it soon.

Sau!

From Bill.Zvonar at windriver.com Tue Feb 2 18:39:57 2021
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Tue, 2 Feb 2021 18:39:57 +0000
Subject: [Starlingx-discuss] Community (& TSC) Call (Feb 3, 2021)
Message-ID:

Hi all, reminder of the weekly TSC/Community calls coming up tomorrow. Please feel free to add items to the agenda [0] for the community call.

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210203T1500
[3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09

From alexandru.dimofte at intel.com Tue Feb 2 22:19:30 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Tue, 2 Feb 2021 22:19:30 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210202T000311Z
Message-ID:

Sanity Test from 2021-February-02 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210202T000311Z/outputs/iso/ )

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210202T000311Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    49 TCs [PASS]
    Sanity Platform     07 TCs [PASS]
    TOTAL: [ 61 TCs ]

AIO - Duplex
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     07 TCs [PASS]
    TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     08 TCs [PASS]
    TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     09 TCs [PASS]
    TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    49 TCs [PASS]
    Sanity Platform     07 TCs [PASS]
    TOTAL: [ 61 TCs ]

AIO - Duplex
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     07 TCs [PASS]
    TOTAL: [ 64 TCs ]

Standard (2+2)
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     08 TCs [PASS]
    TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
    Setup               04 TCs [PASS]
    Provisioning        01 TCs [PASS]
    Sanity OpenStack    52 TCs [PASS]
    Sanity Platform     09 TCs [PASS]
    TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania

From sharath.kumar at intel.com Wed Feb 3 10:44:12 2021
From: sharath.kumar at intel.com (Kumar, Sharath)
Date: Wed, 3 Feb 2021 10:44:12 +0000
Subject: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1)
Message-ID:

Hi All,

We are seeing the stx-openstack Armada helm chart failing during apply. The initial docker images are not downloading and an error is thrown. It was working until last night; since this morning we are seeing this issue.

Kindly fix the issue and do the needful. Thank you in advance.

Regards,
Sharath

From haridhar.kalvala at intel.com Wed Feb 3 11:09:04 2021
From: haridhar.kalvala at intel.com (Kalvala, Haridhar)
Date: Wed, 3 Feb 2021 11:09:04 +0000
Subject: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1)
In-Reply-To:
Message-ID:

Even I am observing the same since morning (IST time zone).

Error log:

2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp app_applied = self._app.perform_app_apply(rpc_app, mode, lifecycle_hook_info_app_apply)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1995, in perform_app_apply
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp name=app.name, version=app.version, reason=e)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp KubeAppApplyFailure: Deployment of application stx-openstack (1.0-49-centos-stable-versioned) failed: (404)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp Reason: Not Found
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response headers: HTTPHeaderDict({'Date': 'Wed, 03 Feb 2021 18:37:19 GMT', 'Content-Length': '210', 'Content-Type': 'application/json'})
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"ceph-pool-kube-rbd\" not found","reason":"NotFound","details":{"name":"ceph-pool-kube-rbd","kind":"secrets"},"code":404}
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp

Regards,
Haridhar Kalvala
From ildiko.vancsa at gmail.com Wed Feb 3 12:01:36 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 3 Feb 2021 13:01:36 +0100
Subject: [Starlingx-discuss] Speaking opportunity at Edge Computing World Europe
In-Reply-To: <586108B0-0704-4397-B1EA-EDE79B5D6CE7@gmail.com>
References: <586108B0-0704-4397-B1EA-EDE79B5D6CE7@gmail.com>
Message-ID: <50DC8757-FAE8-42D0-A733-2BA80FD98704@gmail.com>

Hi StarlingX Community,

This is a friendly reminder about the speaking opportunity at the upcoming Edge Computing World Europe conference. If you are interested in presenting, please reach out to me with session proposals __until the end of this Friday (February 5)__.

Please let me know if you have any questions.

Thanks,
Ildikó

> On Jan 26, 2021, at 11:07, Ildiko Vancsa wrote:
>
> Hi StarlingX Community,
>
> I'm reaching out to you about a speaking opportunity at the upcoming Edge Computing World Europe event that will be held March 9-11, 2021 in a virtual format.
>
> There is a 20-minute slot available to grab if anyone would be interested in giving an overview presentation about the project or talking about any interesting feature or functionality.
>
> If you would like to present please reach out to me as soon as possible to discuss the details.
>
> Please let me know if you have any questions.
>
> Thanks and Best Regards,
> Ildikó

From bruce.e.jones at intel.com Wed Feb 3 15:23:12 2021
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Wed, 3 Feb 2021 15:23:12 +0000
Subject: [Starlingx-discuss] StarlingX TSC minutes from Feb 3 2021
Message-ID:

Feb 3 2021

* Standing Topics
  * One open spec: https://review.opendev.org/c/starlingx/specs/+/772079
* [HSC] - Booting Controller-1 over VLAN PXE
  * Trying to leverage an Ironic network to boot StarlingX nodes.
  * Not sure that PXE boot can be done over a VLAN-tagged network (at all).
    * The servers they are using support PXE booting over VLAN in the BIOS.
  * They can see the host in the host list and give it a personality, but then they get an error on the TFTP file transfer.
  * Please send the logs / error messages to the list so the networking experts in StarlingX can weigh in.
* [HSC] - Queries Related to Storage Node:
  * Can we integrate a pre-deployed Ceph storage cluster (already created by us) with the existing StarlingX Dedicated Storage Model, rather than StarlingX deploying its own storage nodes?
    * There used to be support in the pre-open-source version of StarlingX for external Ceph clusters, but we're not sure that support was released as open source.
    * We don't think it's feasible to support external storage clusters joining a StarlingX storage cluster, but supporting an external Ceph cluster could be a new feature for a future release.
  * Is it possible to have a storage network which is separate from the mgmt+cluster networks, so that all the storage traffic passes only through that network?
    * This is also a feasible feature that was previously supported but then dropped due to lack of user interest. It could be re-implemented in a future release.
  * Is there a way to separate client traffic from storage cluster traffic (running on the Mgmt+Cluster host network) if we enable the Ceph metadata server on controller nodes?
  * What is the upgrade path in a Controller Storage Model: will storage data get lost or will it be preserved while upgrading?
    * There is no support in the StarlingX open source project for software upgrades. There are commercial software providers who deliver and support software upgrades, e.g. Wind River.
* Can someone please take a look at bug 1913043 "StarlingX Provision-Setup Error"?
  * https://bugs.launchpad.net/starlingx/+bug/1913043

-----Original Appointment-----
From: Zvonar, Bill
Sent: Wednesday, October 7, 2020 7:57 AM
To: Zvonar, Bill; starlingx-discuss at lists.starlingx.io
Cc: Wensley, Barton; Jones, Bruce E
Subject: StarlingX TSC & Community Call
When: Wednesday, February 3, 2021 10:00 AM-11:00 AM (UTC-05:00) Eastern Time (US & Canada).
Where: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09

From Bill.Zvonar at windriver.com Wed Feb 3 15:40:43 2021
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 3 Feb 2021 15:40:43 +0000
Subject: [Starlingx-discuss] Community (& TSC) Call (Feb 3, 2021)
In-Reply-To:
Message-ID:

* Standing Topics
  * Sanity
    * back to Green, woohoo
    * issues today with Helm-chart apply failing on the 4.0.1 release
      * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010758.html
      * per Scott, we may want to re-sanitize 4.0.1; Nicolae will check (Sanjay will send info re: exact version)
  * Gerrit Reviews in Need of Attention
    * https://review.opendev.org/c/starlingx/openstack-armada-app/+/768388
* Topics for this Week
  * nothing this week
* ARs from Previous Meetings
  * Bruce/Austin - spec authorship re: RHEL DUP - pending
  * Bill/Nicolae/Scott - docker hub pro account - pending
* Open Requests for Help
  * HTTPInternalServerError
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-January/010714.html
    * no updates
  * N3000 FPGA image update doesn't complete
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-January/010686.html
    * no updates
  * [Issue in StarlingX 4.0] Deploying STX on a VLAN Based Network
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-January/010697.html
    * no updates
  * PTP4L configuration in StarlingX for custom sections
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010752.html
    * no updates
* Build Matters (if required)
  * nothing this week

From David.Sullivan at windriver.com Wed Feb 3 16:12:56 2021
From: David.Sullivan at windriver.com (Sullivan, David)
Date: Wed, 3 Feb 2021 16:12:56 +0000
Subject: [Starlingx-discuss] PTP4L configuration in StarlingX for custom sections
In-Reply-To:
Message-ID:

Hi Venkata,

That functionality isn't available at this time. The ptp service-parameter values are only used to populate the global section of the ptp4l conf. The service-parameter values are system-wide, but the ptp interface configurations are host-specific, particularly in a multinode system.

David
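To make David's point concrete: only [global]-section values flow through the service-parameter mechanism, so the supported flow is add-then-apply. A minimal sketch using values from the conf quoted in this thread (service and section names as used in Venkata's own command history; treat this as illustrative, not authoritative):

# global-section keys land in [global] of /etc/ptp4l.conf
system service-parameter-add ptp global domainNumber=44
system service-parameter-add ptp global slaveOnly=1
# regenerate and push the ptp configuration
system service-parameter-apply ptp

Custom sections such as [unicast_master_table] or per-interface blocks have no equivalent in this mechanism, which is why the '--resource' attempt never reaches ptp4l.conf.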
From build.starlingx at gmail.com Wed Feb 3 18:10:59 2021
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 3 Feb 2021 13:10:59 -0500 (EST)
Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images - Build # 288 - Still Failing!
In-Reply-To: <1001909126.980.1610973444283.JavaMail.javamailuser@localhost>
References: <1001909126.980.1610973444283.JavaMail.javamailuser@localhost>
Message-ID: <1392569599.87.1612375860502.JavaMail.javamailuser@localhost>

Project: STX_build_docker_images
Build #: 288
Status: Still Failing
Timestamp: 20210201T084725Z
Branch: master

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210201T053000Z/logs

--------------------------------------------------------------------------------
Parameters

BRANCH: master
MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20210201T053000Z
OS: centos
MUNGED_BRANCH: master
MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210201T053000Z/logs
MASTER_BUILD_NUMBER: 815
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210201T053000Z/logs
MASTER_JOB_NAME: STX_build_master_master
MY_REPO_ROOT: /localdisk/designer/jenkins/master
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/monolithic
PUBLISH_TIMESTAMP: 20210201T053000Z
DOCKER_BUILD_ID: jenkins-master-20210201T053000Z-builder
TIMESTAMP: 20210201T053000Z
OS_VERSION: 7.5.1804
BUILD_STREAM: stable
PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210201T053000Z/inputs
LAYER:
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210201T053000Z/outputs

From build.starlingx at gmail.com Wed Feb 3 18:11:02 2021
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 3 Feb 2021 13:11:02 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 815 - Failure!
Message-ID: <437617364.90.1612375863228.JavaMail.javamailuser@localhost>

Project: STX_build_master_master
Build #: 815
Status: Failure
Timestamp: 20210201T053000Z
Branch: master

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210201T053000Z/logs

--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false
FORCE_BUILD: true

From maryx.camp at intel.com Wed Feb 3 21:13:00 2021
From: maryx.camp at intel.com (Camp, MaryX)
Date: Wed, 3 Feb 2021 21:13:00 +0000
Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 03-Feb-21
Message-ID:

Hello all,

Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.

[1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings
[2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation

thanks,
Mary Camp

==========
03-Feb-21

All -- reviews merged since last meeting: 5

All -- bug status -- 17 total - the team agrees to defer all low-priority LPs until the upstreaming effort is completed. 13 LPs are WIP against API documentation, which is generated from source code (low priority). Those reviews are here: https://review.opendev.org/#/q/project:starlingx/config

Status/questions/opens

Kudos to Juanita for her perseverance when submitting the SNMPv3 doc review: https://review.opendev.org/c/starlingx/docs/+/773544
Next steps are to wait for +1 from the reviewers, then Juanita will assign it to the Test team using Launchpad.

Kudos to Ron for implementing the htmlChecks script. This will run every time you run "tox -e docs" to build the docs. It is not a gate to pushing a review to the repo.
It provides a warning if there are formatting issues in the HTML file, such as the "grey bar" that is shown when there are indent issues.

Nothing to report this week on the version drop-down work.

From build.starlingx at gmail.com Wed Feb 3 23:54:05 2021
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 3 Feb 2021 18:54:05 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1511 - Failure!
Message-ID: <1423911793.94.1612396446088.JavaMail.javamailuser@localhost>

Project: STX_build_lst_audit
Build #: 1511
Status: Failure
Timestamp: 20210203T233313Z
Branch:

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210203T231710Z/logs

--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210203T231710Z
DOCKER_BUILD_ID: jenkins-master-distro-20210203T231710Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210203T231710Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210203T231710Z/logs
MASTER_JOB_NAME: STX_build_layer_distro_master_master
LAYER: distro
MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/distro

From haridhar.kalvala at intel.com Thu Feb 4 03:16:16 2021
From: haridhar.kalvala at intel.com (Kalvala, Haridhar)
Date: Thu, 4 Feb 2021 03:16:16 +0000
Subject: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1)
In-Reply-To:
Message-ID:

Hello All,

Any update/comment on the issue below? It still persists with any version of the stx-openstack app (e.g. stx-openstack-1.0-49-centos-stable-versioned.tgz).

Thank you,
Haridhar Kalvala

Pasting the error log again:

[sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack
+---------------+----------------------------------+
| Property      | Value                            |
+---------------+----------------------------------+
| active        | False                            |
| app_version   | 1.0-49-centos-stable-versioned   |
| created_at    | 2021-02-04T11:08:55.408108+00:00 |
| manifest_file | stx-openstack.yaml               |
| manifest_name | armada-manifest                  |
| name          | stx-openstack                    |
| progress      | None                             |
| status        | applying                         |
| updated_at    | 2021-02-04T11:09:20.815085+00:00 |
+---------------+----------------------------------+

Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress.

[sysadmin at controller-0 ~(keystone_admin)]$ tail -f /var/log/sysinv.log
2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp app_applied = self._app.perform_app_apply(rpc_app, mode, lifecycle_hook_info_app_apply)
2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1995, in perform_app_apply
2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp name=app.name, version=app.version, reason=e)
2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp KubeAppApplyFailure: Deployment of application stx-openstack (1.0-49-centos-stable-versioned) failed: (404)
2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp Reason: Not Found
2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response headers: HTTPHeaderDict({'Date': 'Thu, 04 Feb 2021 11:09:38 GMT', 'Content-Length': '210', 'Content-Type': 'application/json'})
2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"ceph-pool-kube-rbd\" not found","reason":"NotFound","details":{"name":"ceph-pool-kube-rbd","kind":"secrets"},"code":404}
2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp
sysinv 2021-02-04 11:10:20.070 99042 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None
sysinv 2021-02-04 11:10:20.071 99042 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'status': u'success', u'in-progress': None, u'sw-update-type': None}
sysinv 2021-02-04 11:10:20.117 99042 INFO sysinv.conductor.kube_app [-] lifecycle hook for application platform-integ-apps (1.0-9) started {'lifecycle_type': u'check', 'relative_timing': u'pre', 'mode': u'auto', 'operation': u'apply', 'extra': {}}.
sysinv 2021-02-04 11:10:20.117 99042 INFO sysinv.conductor.manager [-] Auto-apply failed prerequisites for platform-integ-apps: Automatic apply is disabled for platform-integ-apps.
sysinv 2021-02-04 11:10:20.122 99042 INFO sysinv.conductor.kube_app [-] lifecycle hook for application oidc-auth-apps (1.0-29) started {'lifecycle_type': u'check', 'relative_timing': u'pre', 'mode': u'auto', 'operation': u'apply', 'extra': {}}.
sysinv 2021-02-04 11:10:20.123 99042 INFO sysinv.conductor.manager [-] Auto-apply failed prerequisites for oidc-auth-apps: Automatic apply is disabled for oidc-auth-apps.
sysinv 2021-02-04 11:10:20.131 99042 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None
sysinv 2021-02-04 11:10:20.132 99042 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'status': u'success', u'in-progress': None, u'sw-update-type': None}
From: "Dharwadkar, Sriram" > Date: Friday, January 29, 2021 at 6:20 AM To: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Procedure to update k8s_root_ca_cert and k8s_root_ca_key in StarlingX-4.0 [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, We are using Distributed StarlingX-4.0. As per the documentation https://docs.starlingx.io/deploy_install_guides/r4_release/ansible_bootstrap_configs.html k8s_root_ca_cert and k8s_root_ca_key are install time only parameters. Documentation says updating k8s_root_ca_cert and k8s_root_ca_key is an involved process. Can you please share the procedure for updating root_ca and root_key ? We have a usecase to update this cert and key during runtime, with reboot it should take new root_ca and root_key, is this possible ? Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu Feb 4 14:04:52 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 4 Feb 2021 14:04:52 +0000 Subject: [Starlingx-discuss] Procedure to update k8s_root_ca_cert and k8s_root_ca_key in StarlingX-4.0 In-Reply-To: References: Message-ID: Yes in localhost.yaml. https://docs.starlingx.io/deploy_install_guides/r5_release/ansible_bootstrap_configs.html#kubernetes-root-ca-certificate-and-key Greg. From: "Dharwadkar, Sriram" Date: Thursday, February 4, 2021 at 7:08 AM To: Greg Waines , "starlingx-discuss at lists.starlingx.io" , "Rai, Ankush" Subject: RE: [Starlingx-discuss] Procedure to update k8s_root_ca_cert and k8s_root_ca_key in StarlingX-4.0 [Please note: This e-mail is from an EXTERNAL e-mail address] Hi Greg, Thanks for the reply. Another question on the same topic. For k8s_root_ca_cert and k8s_root_ca_key, instead of giving self signed certificate and key, is it possible to give the key and certificate signed by a third party CA. If yes, how to give third party CA certificate as an input to installation (in localhost.yaml?). Regards, Sriram From: Waines, Greg Sent: Friday, January 29, 2021 6:38 PM To: Dharwadkar, Sriram ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Procedure to update k8s_root_ca_cert and k8s_root_ca_key in StarlingX-4.0 The upstream procedure is here: https://kubernetes.io/docs/tasks/tls/manual-rotation-of-ca-certificates/ Greg. From: "Dharwadkar, Sriram" > Date: Friday, January 29, 2021 at 6:20 AM To: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Procedure to update k8s_root_ca_cert and k8s_root_ca_key in StarlingX-4.0 [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, We are using Distributed StarlingX-4.0. As per the documentation https://docs.starlingx.io/deploy_install_guides/r4_release/ansible_bootstrap_configs.html k8s_root_ca_cert and k8s_root_ca_key are install time only parameters. Documentation says updating k8s_root_ca_cert and k8s_root_ca_key is an involved process. Can you please share the procedure for updating root_ca and root_key ? We have a usecase to update this cert and key during runtime, with reboot it should take new root_ca and root_key, is this possible ? Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolae.jascanu at intel.com Thu Feb 4 14:51:38 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Thu, 4 Feb 2021 14:51:38 +0000 Subject: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1) In-Reply-To: References: Message-ID: Hi All, We installed the 4.0.1 from (http://mirror.starlingx.cengn.ca/mirror/starlingx/release/4.0.1/centos/flock/outputs/) on a Virtual Standard configuration in two scenarios: downloading the images from internet and using our mirror registry that already has the images. We managed to fully install and execute sanity on both cases. We are not able to replicate your issue. Regards, Nicolae Jascanu, Ph.D. Software Engineer IOTG Galati, Romania From: Kalvala, Haridhar Sent: Thursday, February 4, 2021 05:16 To: starlingx-discuss at lists.starlingx.io Cc: Kumar, Sharath Subject: Re: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1) Hello All, Any update/comment on below issue. It still persists with any version stx-openstack app(Ex: stx-openstack-1.0-49-centos-stable-versioned.tgz) ? Thank you, Haridhar Kalvala Pasting error log again : ksysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | False | | app_version | 1.0-49-centos-stable-versioned | | created_at | 2021-02-04T11:08:55.408108+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2021-02-04T11:09:20.815085+00:00 | +---------------+----------------------------------+ Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress. 
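A quick way to confirm what the log below reports is to check for the Ceph pool secret directly; a sketch, assuming the secret lives in the kube-system namespace, where platform-integ-apps normally creates it:

# check whether the secret named in the 404 response exists
# (kube-system namespace is an assumption)
kubectl get secret ceph-pool-kube-rbd -n kube-system
# the log below also shows auto-apply disabled for platform-integ-apps,
# which is the application that creates this secret; check and apply it
system application-show platform-integ-apps
system application-apply platform-integ-apps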
[sysadmin at controller-0 ~(keystone_admin)]$ tail -f /var/log/sysinv.log 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp app_applied = self._app.perform_app_apply(rpc_app, mode, lifecycle_hook_info_app_apply) 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1995, in perform_app_apply 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp name=app.name, version=app.version, reason=e) 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp KubeAppApplyFailure: Deployment of application stx-openstack (1.0-49-centos-stable-versioned) failed: (404) 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp Reason: Not Found 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response headers: HTTPHeaderDict({'Date': 'Thu, 04 Feb 2021 11:09:38 GMT', 'Content-Length': '210', 'Content-Type': 'application/json'}) 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"ceph-pool-kube-rbd\" not found","reason":"NotFound","details":{"name":"ceph-pool-kube-rbd","kind":"secrets"},"code":404} 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp sysinv 2021-02-04 11:10:20.070 99042 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None sysinv 2021-02-04 11:10:20.071 99042 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'status': u'success', u'in-progress': None, u'sw-update-type': None} sysinv 2021-02-04 11:10:20.117 99042 INFO sysinv.conductor.kube_app [-] lifecycle hook for application platform-integ-apps (1.0-9) started {'lifecycle_type': u'check', 'relative_timing': u'pre', 'mode': u'auto', 'operation': u'apply', 'extra': {}}. sysinv 2021-02-04 11:10:20.117 99042 INFO sysinv.conductor.manager [-] Auto-apply failed prerequisites for platform-integ-apps: Automatic apply is disabled for platform-integ-apps. sysinv 2021-02-04 11:10:20.122 99042 INFO sysinv.conductor.kube_app [-] lifecycle hook for application oidc-auth-apps (1.0-29) started {'lifecycle_type': u'check', 'relative_timing': u'pre', 'mode': u'auto', 'operation': u'apply', 'extra': {}}. sysinv 2021-02-04 11:10:20.123 99042 INFO sysinv.conductor.manager [-] Auto-apply failed prerequisites for oidc-auth-apps: Automatic apply is disabled for oidc-auth-apps. sysinv 2021-02-04 11:10:20.131 99042 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None sysinv 2021-02-04 11:10:20.132 99042 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'status': u'success', u'in-progress': None, u'sw-update-type': None} From: Kalvala, Haridhar Sent: Wednesday, February 3, 2021 4:39 PM To: Kumar, Sharath >; starlingx-discuss at lists.starlingx.io Subject: RE: Default Helm-chart apply failing (4.0.1) Even I am observing the same since morning (IST time zone). 
Error log:
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp app_applied = self._app.perform_app_apply(rpc_app, mode, lifecycle_hook_info_app_apply)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1995, in perform_app_apply
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp name=app.name, version=app.version, reason=e)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp KubeAppApplyFailure: Deployment of application stx-openstack (1.0-49-centos-stable-versioned) failed: (404)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp Reason: Not Found
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response headers: HTTPHeaderDict({'Date': 'Wed, 03 Feb 2021 18:37:19 GMT', 'Content-Length': '210', 'Content-Type': 'application/json'})
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"ceph-pool-kube-rbd\" not found","reason":"NotFound","details":{"name":"ceph-pool-kube-rbd","kind":"secrets"},"code":404}
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp
Regards, Haridhar Kalvala
From: Kumar, Sharath > Sent: Wednesday, February 3, 2021 4:14 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1)
Hi All, We are seeing the stx-openstack Armada helm chart failing during apply. The initial docker images are not downloading, and an error is thrown. It was working until last night; since this morning we have been seeing this issue. Kindly fix the issue and do the needful. Thank you in advance. Regards, Sharath
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From zhaos at neusoft.com Thu Feb 4 15:44:03 2021 From: zhaos at neusoft.com (zs) Date: Thu, 4 Feb 2021 23:44:03 +0800 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210202T000311Z In-Reply-To: References: Message-ID:
Hi Nick & Alex:
Thank you so much to your team for providing such a timely green version of the STX mirror. Recently we have been looking forward to the community green version almost every day. (^_^) Thank you very much for your hard work. 👍
We used this green version to carry out a deployment with the Rook & Ceph feature and the creation of VM instances. The VM system runs smoothly and we got very good results: the Simplex (VM) has been deployed and its tests passed. Other workloads are still being tested. If there is more good news, we will share it with the community.
Thanks to every dedicated expert in the community; you have worked so hard. Thank you!
Best Regards
――――――――
Shuai
From: "Dimofte, Alexandru" 
Date: Wednesday, February 3, 2021, 6:19 AM
To: "starlingx-discuss at lists.starlingx.io" 
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210202T000311Z
Sanity Test from 2021-February-02 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210202T000311Z/outputs/iso/ )
Status: GREEN
Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210202T000311Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz
===========================================
Sanity Test executed on Bare Metal
AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]
AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]
Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]
Standard External - Dedicated Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]
===========================================
Sanity Test executed on Virtual Environment
AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]
AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]
Standard (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]
Standard External Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]
Regards,
STX Validation Team
Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful. If you have received this communication in error, please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you.
---------------------------------------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 8409 bytes Desc: not available URL: From build.starlingx at gmail.com Thu Feb 4 21:03:43 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Feb 2021 16:03:43 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1515 - Failure! Message-ID: <1962529973.98.1612472624624.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1515 Status: Failure Timestamp: 20210204T205920Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210204T204525Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-compiler/20210204T204525Z DOCKER_BUILD_ID: jenkins-master-compiler-20210204T204525Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-compiler/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210204T204525Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/compiler/20210204T204525Z/logs MASTER_JOB_NAME: STX_build_layer_compiler_master_master LAYER: compiler MY_REPO_ROOT: /localdisk/designer/jenkins/master-compiler PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/compiler From build.starlingx at gmail.com Thu Feb 4 21:37:33 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Feb 2021 16:37:33 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1516 - Still Failing! In-Reply-To: <1669895227.96.1612472622104.JavaMail.javamailuser@localhost> References: <1669895227.96.1612472622104.JavaMail.javamailuser@localhost> Message-ID: <531331998.101.1612474654497.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1516 Status: Still Failing Timestamp: 20210204T211816Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210204T210321Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210204T210321Z DOCKER_BUILD_ID: jenkins-master-distro-20210204T210321Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210204T210321Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210204T210321Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/distro From haochuan.z.chen at intel.com Fri Feb 5 01:35:24 2021 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Fri, 5 Feb 2021 01:35:24 +0000 Subject: [Starlingx-discuss] doc update for rook-ceph Message-ID: HI https://review.opendev.org/c/starlingx/docs/+/751158/12 I update doc for rook deployment, please review. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From haridhar.kalvala at intel.com Fri Feb 5 05:48:42 2021 From: haridhar.kalvala at intel.com (Kalvala, Haridhar) Date: Fri, 5 Feb 2021 05:48:42 +0000 Subject: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1) In-Reply-To: References: Message-ID: Hi Nic & All, We(Me, Sanjay & Sharath) have taken image from link you shared below and still facing same issue. 
I am not sure about the type of error (it was working fine until two days ago). Just a guess: could it be related to a local proxy setting? Any input/suggestions would help.
[sysadmin at controller-0 ~(keystone_admin)]$ tail -f /var/log/sysinv.log
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp app_applied = self._app.perform_app_apply(rpc_app, mode, lifecycle_hook_info_app_apply)
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1995, in perform_app_apply
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp name=app.name, version=app.version, reason=e)
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp KubeAppApplyFailure: Deployment of application stx-openstack (1.0-49-centos-stable-versioned) failed: (404)
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp Reason: Not Found
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp HTTP response headers: HTTPHeaderDict({'Date': 'Fri, 05 Feb 2021 13:40:32 GMT', 'Content-Length': '210', 'Content-Type': 'application/json'})
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"ceph-pool-kube-rbd\" not found","reason":"NotFound","details":{"name":"ceph-pool-kube-rbd","kind":"secrets"},"code":404}
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp
2021-02-05 13:40:32.592 93081 ERROR sysinv.openstack.common.rpc.amqp
Regards, Haridhar Kalvala
From: Jascanu, Nicolae Sent: Thursday, February 4, 2021 8:22 PM To: Kalvala, Haridhar ; starlingx-discuss at lists.starlingx.io Cc: Kumar, Sharath ; Mukherjee, Sanjay K Subject: RE: Default Helm-chart apply failing (4.0.1)
Hi All, We installed the 4.0.1 from (http://mirror.starlingx.cengn.ca/mirror/starlingx/release/4.0.1/centos/flock/outputs/) on a Virtual Standard configuration in two scenarios: downloading the images from the internet, and using our mirror registry that already has the images. We managed to fully install and execute sanity in both cases. We are not able to replicate your issue.
Regards, Nicolae Jascanu, Ph.D. Software Engineer IOTG Galati, Romania
From: Kalvala, Haridhar > Sent: Thursday, February 4, 2021 05:16 To: starlingx-discuss at lists.starlingx.io Cc: Kumar, Sharath > Subject: Re: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1)
Hello All, Any update/comment on the below issue? It still persists with any version of the stx-openstack app (e.g. stx-openstack-1.0-49-centos-stable-versioned.tgz). Thank you, Haridhar Kalvala
Pasting the error log again:
[sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | False |
| app_version | 1.0-49-centos-stable-versioned |
| created_at | 2021-02-04T11:08:55.408108+00:00 |
| manifest_file | stx-openstack.yaml |
| manifest_name | armada-manifest |
| name | stx-openstack |
| progress | None |
| status | applying |
| updated_at | 2021-02-04T11:09:20.815085+00:00 |
+---------------+----------------------------------+
Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress.
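If a local proxy is the suspect (per the question above), the proxy configuration the platform actually uses can be inspected before re-applying; a sketch, with illustrative grep patterns:

# list any proxy-related service parameters configured on the system
system service-parameter-list | grep -i proxy
# confirm whether docker itself picked up a proxy configuration
sudo docker info 2>/dev/null | grep -i proxy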
[sysadmin at controller-0 ~(keystone_admin)]$ tail -f /var/log/sysinv.log 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp app_applied = self._app.perform_app_apply(rpc_app, mode, lifecycle_hook_info_app_apply) 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1995, in perform_app_apply 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp name=app.name, version=app.version, reason=e) 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp KubeAppApplyFailure: Deployment of application stx-openstack (1.0-49-centos-stable-versioned) failed: (404) 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp Reason: Not Found 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response headers: HTTPHeaderDict({'Date': 'Thu, 04 Feb 2021 11:09:38 GMT', 'Content-Length': '210', 'Content-Type': 'application/json'}) 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"ceph-pool-kube-rbd\" not found","reason":"NotFound","details":{"name":"ceph-pool-kube-rbd","kind":"secrets"},"code":404} 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp 2021-02-04 11:09:39.231 99042 ERROR sysinv.openstack.common.rpc.amqp sysinv 2021-02-04 11:10:20.070 99042 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None sysinv 2021-02-04 11:10:20.071 99042 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'status': u'success', u'in-progress': None, u'sw-update-type': None} sysinv 2021-02-04 11:10:20.117 99042 INFO sysinv.conductor.kube_app [-] lifecycle hook for application platform-integ-apps (1.0-9) started {'lifecycle_type': u'check', 'relative_timing': u'pre', 'mode': u'auto', 'operation': u'apply', 'extra': {}}. sysinv 2021-02-04 11:10:20.117 99042 INFO sysinv.conductor.manager [-] Auto-apply failed prerequisites for platform-integ-apps: Automatic apply is disabled for platform-integ-apps. sysinv 2021-02-04 11:10:20.122 99042 INFO sysinv.conductor.kube_app [-] lifecycle hook for application oidc-auth-apps (1.0-29) started {'lifecycle_type': u'check', 'relative_timing': u'pre', 'mode': u'auto', 'operation': u'apply', 'extra': {}}. sysinv 2021-02-04 11:10:20.123 99042 INFO sysinv.conductor.manager [-] Auto-apply failed prerequisites for oidc-auth-apps: Automatic apply is disabled for oidc-auth-apps. sysinv 2021-02-04 11:10:20.131 99042 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None sysinv 2021-02-04 11:10:20.132 99042 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'status': u'success', u'in-progress': None, u'sw-update-type': None} From: Kalvala, Haridhar Sent: Wednesday, February 3, 2021 4:39 PM To: Kumar, Sharath >; starlingx-discuss at lists.starlingx.io Subject: RE: Default Helm-chart apply failing (4.0.1) Even I am observing the same since morning (IST time zone). 
Error log:
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp app_applied = self._app.perform_app_apply(rpc_app, mode, lifecycle_hook_info_app_apply)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1995, in perform_app_apply
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp name=app.name, version=app.version, reason=e)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp KubeAppApplyFailure: Deployment of application stx-openstack (1.0-49-centos-stable-versioned) failed: (404)
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp Reason: Not Found
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response headers: HTTPHeaderDict({'Date': 'Wed, 03 Feb 2021 18:37:19 GMT', 'Content-Length': '210', 'Content-Type': 'application/json'})
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"secrets \"ceph-pool-kube-rbd\" not found","reason":"NotFound","details":{"name":"ceph-pool-kube-rbd","kind":"secrets"},"code":404}
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp
2021-02-03 18:37:19.530 99042 ERROR sysinv.openstack.common.rpc.amqp
Regards, Haridhar Kalvala
From: Kumar, Sharath > Sent: Wednesday, February 3, 2021 4:14 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Default Helm-chart apply failing (4.0.1)
Hi All, We are seeing the stx-openstack Armada helm chart failing during apply. The initial docker images are not downloading, and an error is thrown. It was working until last night; since this morning we have been seeing this issue. Kindly fix the issue and do the needful. Thank you in advance. Regards, Sharath
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lists at optimcloud.com Fri Feb 5 06:01:38 2021 From: lists at optimcloud.com (Lists) Date: Fri, 5 Feb 2021 13:01:38 +0700 Subject: [Starlingx-discuss] STX AIO Virtual OAM_IP ? Message-ID:
Just curious: following the guide for STX AIO Simplex virtual, it states
export CONTROLLER0_OAM_CIDR=10.10.10.3/24
export DEFAULT_OAM_GATEWAY=10.10.10.1
sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
sudo ip link set up dev enp7s1
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
Are we using this IP/CIDR as the defaults, or should we be using the Ubuntu VM's IP? It is all a bit confusing with AIO: if it is a VM inside a VM, how does one access the OAM_IF?
ubuntu 16.04 IP: 172.24.64.49
controller-0 IP: 10.10.10.3
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From alexandru.dimofte at intel.com Fri Feb 5 07:51:41 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 5 Feb 2021 07:51:41 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210203T235346Z Message-ID: Sanity Test from 2021-February-03 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210203T235346Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210203T235346Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8408 bytes Desc: image001.png URL: From Lokendra.Rathour at hsc.com Fri Feb 5 11:57:10 2021 From: Lokendra.Rathour at hsc.com (Lokendra Singh Rathour) Date: Fri, 5 Feb 2021 11:57:10 +0000 Subject: [Starlingx-discuss] [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface Message-ID: Hello Team, We are trying to setup dedicated Storage setup using StarlingX 4.0 over which we have certain observations/Error during the time of worker Node Configuration for the LAG type Data interfaces. Though we have tweaked the procedure a bit and successfully unlocked the Worker Nodes,we are facing the error in upload the STX Application packages. Steps as followed: STEP 1: We have created bonds for Data Network on Worker Node(reference document: https://docs.starlingx.io/configuration/host_interface_network_config.html#:~:text=When%20a%20host%20is%20added,system%20host%2Dif%2Dadd.) 
* system host-if-add worker-1 -m 1500 -a active_standby data1bond ae eth1 eth2 * system host-if-add worker-1 -m 1500 -a active_standby data2bond ae eth3 eth4 STEP 2: Then further as per the main documents when Configuring data interfaces for worker nodes ( reference document: https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_install_kubernetes.html) * SPL=/tmp/tmp-system-port-list * SPIL=/tmp/tmp-system-host-if-list Above files does not have the information of the created Bond interfaces for data Nodes, Further in order to get the the value of ${DATA0IFUUID} and ${DATA1IFUUID} below mentioned commands are executed: DATA0IF= DATA1IF= PHYSNET0='physnet0' PHYSNET1='physnet1' SPL=/tmp/tmp-system-port-list SPIL=/tmp/tmp-system-host-if-list # configure the datanetworks in sysinv, prior to referencing it # in the ``system host-if-modify`` command'. system datanetwork-add ${PHYSNET0} vlan system datanetwork-add ${PHYSNET1} vlan for NODE in worker-0 worker-1; do echo "Configuring interface for: $NODE" set -ex system host-port-list ${NODE} --nowrap > ${SPL} system host-if-list -a ${NODE} --nowrap > ${SPIL} DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID} system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID} system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0} system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1} set +ex done But in our case these values in the variable(${DATA0IFUUID} and ${DATA1IFUUID}) were not getting populated, therefore we have used the UUID received from command : System host-if-list worker-0 System host-if-list worker-1 We observed that UUID of any ethernet interface mentioned in the file (SPIL=/tmp/tmp-system-host-if-list ) matches with the UUID obtained by running the above command(System host-if-list worker-0 ) , so as we did not get the UUID from the file in case LAG Bonds we passed the Values of UUID directly. STEP 3: Using the change, we were able to successfully unlock the worker Nodes. 
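The STEP 2 script resolves each interface UUID via its underlying port name, and a bond has no such single port entry, which is why the two variables stayed empty. A scripted form of the manual lookup described above, as a sketch, assuming the bond names data1bond and data2bond from STEP 1:

# ae (bond) interfaces do not appear in 'system host-port-list', so take the
# UUID from the interface list by name instead (field 2 = uuid, field 4 = name)
DATA0IFUUID=$(system host-if-list -a ${NODE} --nowrap | awk '$4 == "data1bond" {print $2}')
DATA1IFUUID=$(system host-if-list -a ${NODE} --nowrap | awk '$4 == "data2bond" {print $2}')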
[sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | unlocked | enabled | available | | 3 | storage-0 | storage | unlocked | enabled | available | | 4 | storage-1 | storage | unlocked | enabled | available | | 5 | worker-0 | worker | unlocked | enabled | available | | 6 | worker-1 | worker | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ STEP 4: Then Further while running the application-upload: system application-upload stx-openstack--centos-stable-versioned.tgz Error as below was seen Error: 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/usr/lib64/python2.7/site-packages/sysinv/helm/helm.py", line 317, in _get_helm_chart_overrides 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app cnamespace)) 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 45, in get_overrides 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app 'hosts': self._get_per_host_overrides() 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 105, in _get_per_host_overrides 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app 'auto_bridge_add': self._get_host_bridges(host)}) 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 141, in _get_host_bridges 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app port_name = self._get_interface_port_name(iface) 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 280, in _get_interface_port_name 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app assert iface['iftype'] == constants.INTERFACE_TYPE_ETHERNET 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app AssertionError 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app Kindly check and help in advising the way forward here w.r.t. as 1. does starlingx support for the bond interface for the Data Networks ? * if yes then do we have any supported document w.r.t the same. Reference Documents: For Deployment: https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_install_kubernetes.html For LAG : : https://docs.starlingx.io/configuration/host_interface_network_config.html#:~:text=When%20a%20host%20is%20added,system%20host%2Dif%2Dadd DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Fri Feb 5 15:37:24 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 5 Feb 2021 15:37:24 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210204T023257Z Message-ID: Sanity Test from 2021-February-04 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210204T023257Z/outputs/iso/ Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210204T023257Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 8408 bytes Desc: image001.png URL: From alexandru.dimofte at intel.com Fri Feb 5 18:26:58 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 5 Feb 2021 18:26:58 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210204T213715Z Message-ID: Sanity Test from 2021-February-04 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210204T213715Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210204T213715Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 8408 bytes Desc: image003.png URL: From susendra.selvaraj at intel.com Mon Feb 8 06:14:41 2021 From: susendra.selvaraj at intel.com (Selvaraj, Susendra) Date: Mon, 8 Feb 2021 06:14:41 +0000 Subject: [Starlingx-discuss] StarlingX TSC minutes from Feb 3 2021 In-Reply-To: References: Message-ID: Hi Bruce, * [HSC] - Booting Controller-1 over VLAN PXE * trying to leverage an Ironic network to boot StarlingX nodes. * Not sure that PXE boot can be done over a VLAN tagged network (at all). ? Servers they are using support PXE booting over VLAN in the BIOS. * They can see the host in the host list and give it a personality, but then they get an error on the TFTP file transfer * Please send the logs / error messages to the list so the Networking experts in StarlingX can wiegh in We did encounter similar issue while setting up Bare Metal AIO Duplex setup. * Its must to ensure network connectivity is proper. * Ping over mgmt. interface is working between the controllers. Regards, Susendra. 
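Two quick checks along those lines, as a sketch (the peer address is a placeholder, not a value from this thread):

# from the active controller, confirm the peer is reachable over the mgmt network
ping -c 3 <controller-1-mgmt-ip>
# confirm a TFTP listener is up for the PXE file transfer (UDP port 69)
sudo ss -ulnp | grep ':69'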
From: Jones, Bruce E Sent: Wednesday, February 3, 2021 8:53 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX TSC minutes from Feb 3 2021
Feb 3 2021
* Standing Topics
* One open spec: https://review.opendev.org/c/starlingx/specs/+/772079
* [HSC] - Booting Controller-1 over VLAN PXE
* Trying to leverage an Ironic network to boot StarlingX nodes.
* Not sure that PXE boot can be done over a VLAN-tagged network (at all).
- The servers they are using support PXE booting over VLAN in the BIOS.
* They can see the host in the host list and give it a personality, but then they get an error on the TFTP file transfer.
* Please send the logs / error messages to the list so the networking experts in StarlingX can weigh in.
* [HSC] - Queries related to Storage Node:
* Can we integrate a pre-deployed Ceph storage cluster (already created by us) with the existing StarlingX Dedicated Storage model, rather than StarlingX deploying its own storage nodes?
- There used to be support in the pre-open-source version of StarlingX for external Ceph clusters, but we're not sure that support was released as open source.
- We don't think it's feasible to support external storage clusters joining a StarlingX storage cluster, but supporting an external Ceph cluster could be a new feature for a future release.
* Is it possible to have a storage network which is separate from the mgmt+cluster networks, so that all the storage traffic passes only through that network?
- This is also a feasible feature that was previously supported but then dropped due to lack of user interest. It could be re-implemented in a future release.
* Is there a way to separate client traffic from storage cluster traffic (running on the Mgmt+Cluster host network) if we enable the Ceph metadata server on controller nodes?
* What is the upgrade path in a Controller Storage model: will storage data get lost, or will it be preserved while upgrading?
- There is no support in the StarlingX open source project for software upgrades. There are commercial software providers who deliver and support software upgrades, e.g. Wind River.
* Can someone please take a look at bug 1913043 "StarlingX Provision-Setup Error"?
* https://bugs.launchpad.net/starlingx/+bug/1913043
-----Original Appointment-----
From: Zvonar, Bill > Sent: Wednesday, October 7, 2020 7:57 AM To: Zvonar, Bill; starlingx-discuss at lists.starlingx.io Cc: Wensley, Barton; Jones, Bruce E Subject: StarlingX TSC & Community Call When: Wednesday, February 3, 2021 10:00 AM-11:00 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From sanathukumar14 at gmail.com Mon Feb 8 07:28:15 2021 From: sanathukumar14 at gmail.com (sanath kumar) Date: Mon, 8 Feb 2021 12:58:15 +0530 Subject: [Starlingx-discuss] Help with StarlingX using Raspberry Pi Message-ID:
Hello StarlingX Team, I am new to StarlingX. Can somebody help me out with how to get started, maybe by using a simple use case example? Is there anybody who has worked on StarlingX using Raspberry Pi? It would be of great help if anybody could help me out with this.
Regards,
Sanath Kumar
+91 8971546076
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From yatindra.shashi at intel.com Mon Feb 8 12:00:38 2021 From: yatindra.shashi at intel.com (Shashi, Yatindra) Date: Mon, 8 Feb 2021 12:00:38 +0000 Subject: [Starlingx-discuss] Connection timeout for cengn build repo Message-ID: Hi Team, I try to build docker image (cgcs-root]$ ./build-tools/build-docker-images/build-stx-base.sh) for the base of STX but it fails due to connection timeout to the repos as below. Can someone tell me why repo (http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml ) is not reachable. I tried from the browser and I get error that this site cannot be reached. ------------------- Step 4/5 : COPY stx.repo / ---> 4c8394607ebc Step 5/5 : RUN set -ex ; sed -i '/\[main\]/ atimeout=120' /etc/yum.conf ; mv /stx.repo /etc/yum.repos.d/ ; yum upgrade --disablerepo=* ${REPO_OPTS} -y ; yum install --disablerepo=* ${REPO_OPTS} -y qemu-img openssh-clients python3 python3-pip python3-wheel rh-python36-mod_wsgi ; rm -rf /var/log/* /tmp/* /var/tmp/* ---> Running in 4493c0916b00 + sed -i '/\[main\]/ atimeout=120' /etc/yum.conf + mv /stx.repo /etc/yum.repos.d/ + yum upgrade '--disablerepo=*' --enablerepo=ussuri-ceph --enablerepo=ussuri-wsgi -y Loaded plugins: fastestmirror, ovl Determining fastest mirrors http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 120000 milliseconds') Trying other mirror. http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 120000 milliseconds') Trying other mirror. http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 120000 milliseconds') Trying other mirror. --------------- Mit freundlichen Grüßen/ with best regards, Yatindra Shashi IoTG DE- Intel Corporation Munich, Germany [A close up of a sign Description automatically generated] Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.png Type: image/png Size: 5821 bytes Desc: image003.png URL: From alexandru.dimofte at intel.com Mon Feb 8 12:31:09 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Mon, 8 Feb 2021 12:31:09 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210206T023259Z Message-ID: Sanity Test from 2021-February-06 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210206T023259Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210206T023259Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8408 bytes Desc: image001.png URL: From build.starlingx at gmail.com Tue Feb 9 00:20:17 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 8 Feb 2021 19:20:17 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1529 - Failure! 
Message-ID: <1476257665.106.1612830018684.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1529 Status: Failure Timestamp: 20210209T001544Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210209T000000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-compiler/20210209T000000Z DOCKER_BUILD_ID: jenkins-master-compiler-20210209T000000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-compiler/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210209T000000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/compiler/20210209T000000Z/logs MASTER_JOB_NAME: STX_build_layer_compiler_master_master LAYER: compiler MY_REPO_ROOT: /localdisk/designer/jenkins/master-compiler PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/compiler From build.starlingx at gmail.com Tue Feb 9 00:53:23 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 8 Feb 2021 19:53:23 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1530 - Still Failing! In-Reply-To: <1658720218.104.1612830015513.JavaMail.javamailuser@localhost> References: <1658720218.104.1612830015513.JavaMail.javamailuser@localhost> Message-ID: <736536022.109.1612832004077.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1530 Status: Still Failing Timestamp: 20210209T003850Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210209T001957Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210209T001957Z DOCKER_BUILD_ID: jenkins-master-distro-20210209T001957Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210209T001957Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210209T001957Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/distro From lists at optimcloud.com Tue Feb 9 02:13:58 2021 From: lists at optimcloud.com (Lists) Date: Tue, 9 Feb 2021 09:13:58 +0700 Subject: [Starlingx-discuss] HostFs update failed: Not enough free space on cgts-vg Message-ID: did a simplex AIO in a ubuntu vm and im receiving this error trying to deploy openstack on it system host-fs-modify controller-0 docker=60 HostFs update failed: Not enough free space on cgts-vg. Current free space 1 GiB, requested total increase 30 GiB -------------- next part -------------- An HTML attachment was scrubbed... URL: From OwenYuen at cmail.carleton.ca Tue Feb 9 02:25:07 2021 From: OwenYuen at cmail.carleton.ca (Owen Yuen) Date: Tue, 9 Feb 2021 02:25:07 +0000 Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI Message-ID: Is it possible to deploy a workload via STX instead of via kubectl or kubernetes GUI directly? We are running an distributed cloud AIO duplex setup so our worker hosts are on con0 and 1 if that makes a difference. Also, how does StarlingX manage workloads from the GUI? Any help would be greatly appreciated. 
Thanks Owen
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Lokendra.Rathour at hsc.com Tue Feb 9 04:48:31 2021 From: Lokendra.Rathour at hsc.com (Lokendra Singh Rathour) Date: Tue, 9 Feb 2021 04:48:31 +0000 Subject: [Starlingx-discuss] [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface In-Reply-To: References: Message-ID:
Hi Team, Any update with respect to the query raised?
Best Regards, Lokendra
From: Lokendra Singh Rathour Sent: Friday, February 5, 2021 5:27 PM To: starlingx-discuss at lists.starlingx.io Subject: [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface
Hello Team, We are trying to set up a dedicated storage setup using StarlingX 4.0, on which we have certain observations/errors during worker node configuration for LAG-type data interfaces. Though we have tweaked the procedure a bit and successfully unlocked the worker nodes, we are facing an error when uploading the STX application packages.
Steps as followed:
STEP 1: We have created bonds for the data network on the worker nodes (reference document: https://docs.starlingx.io/configuration/host_interface_network_config.html#:~:text=When%20a%20host%20is%20added,system%20host%2Dif%2Dadd.)
* system host-if-add worker-1 -m 1500 -a active_standby data1bond ae eth1 eth2
* system host-if-add worker-1 -m 1500 -a active_standby data2bond ae eth3 eth4
STEP 2: Then, further, as per the main document for configuring data interfaces for worker nodes (reference document: https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_install_kubernetes.html)
* SPL=/tmp/tmp-system-port-list
* SPIL=/tmp/tmp-system-host-if-list
The above files do not have the information for the created bond interfaces on the data nodes. Further, in order to get the values of ${DATA0IFUUID} and ${DATA1IFUUID}, the below-mentioned commands are executed:
DATA0IF=
DATA1IF=
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
# configure the datanetworks in sysinv, prior to referencing it
# in the ``system host-if-modify`` command'.
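# NOTE: the port-based UUID lookup below only resolves Ethernet interfaces;
# ae (bond) interfaces have no port entry, so their UUIDs have to be read
# from 'system host-if-list' by interface name instead (see the workaround
# described after this script).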
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

for NODE in worker-0 worker-1; do
  echo "Configuring interface for: $NODE"
  set -ex
  system host-port-list ${NODE} --nowrap > ${SPL}
  system host-if-list -a ${NODE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
  DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
  DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
  DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
  DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
  DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
  system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
  system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
  system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
  set +ex
done

In our case, however, the variables ${DATA0IFUUID} and ${DATA1IFUUID} were not getting populated, so we instead used the UUIDs returned by:

system host-if-list worker-0
system host-if-list worker-1

We observed that the UUID of any Ethernet interface listed in the file (SPIL=/tmp/tmp-system-host-if-list) matches the UUID returned by the command above; since the file did not give us UUIDs for the LAG bonds, we passed the UUID values directly.

STEP 3: With this change, we were able to successfully unlock the worker nodes.

[sysadmin at controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | storage-0 | storage | unlocked | enabled | available |
| 4 | storage-1 | storage | unlocked | enabled | available |
| 5 | worker-0 | worker | unlocked | enabled | available |
| 6 | worker-1 | worker | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+

STEP 4: Then, while running the application upload:

system application-upload stx-openstack--centos-stable-versioned.tgz

the following error was seen:

2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/usr/lib64/python2.7/site-packages/sysinv/helm/helm.py", line 317, in _get_helm_chart_overrides
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app cnamespace))
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 45, in get_overrides
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app 'hosts': self._get_per_host_overrides()
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 105, in _get_per_host_overrides
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app 'auto_bridge_add': self._get_host_bridges(host)})
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 141, in _get_host_bridges
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app port_name = self._get_interface_port_name(iface)
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 280, in _get_interface_port_name
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app assert iface['iftype'] == constants.INTERFACE_TYPE_ETHERNET
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app AssertionError
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app

Kindly check and advise on the way forward:

1. Does StarlingX support bond interfaces for the data networks?
   * If yes, is there a supported document for this?

Reference documents:
For deployment: https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_install_kubernetes.html
For LAG: https://docs.starlingx.io/configuration/host_interface_network_config.html#:~:text=When%20a%20host%20is%20added,system%20host%2Dif%2Dadd

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
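On the bond question, one quick check of what the stx-openstack neutron plugin will encounter: the failing assert above fires when a data-class interface has an iftype other than 'ethernet'. A minimal sketch with the standard CLI (the interface name data0 is a placeholder):

# List all interfaces, including class and type, per host:
system host-if-list -a worker-0

# Inspect a specific data interface; a bond will show iftype 'ae',
# which is exactly what the override generation asserts against:
system host-if-show worker-0 data0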
From openinfradn at gmail.com Tue Feb 9 06:58:15 2021
From: openinfradn at gmail.com (open infra)
Date: Tue, 9 Feb 2021 12:28:15 +0530
Subject: [Starlingx-discuss] Internal endpoint not found
Message-ID:

Hi,

After installing the StarlingX AIO Simplex, I noticed that the system complains about missing endpoints. Should I run any scripts or activate any service after completing the OpenStack installation with STX? I have referred to https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/index.html

Am I missing anything here?

[sysadmin at controller-0 ~(keystone_admin)]$ openstack flavor list
internal endpoint for compute service in RegionOne region not found
[sysadmin at controller-0 ~(keystone_admin)]$ openstack image list
internal endpoint for image service in RegionOne region not found
[sysadmin at controller-0 ~(keystone_admin)]$ openstack compute service list
internal endpoint for compute service in RegionOne region not found

The following endpoints are available:
http://paste.openstack.org/show/802451/

Services:
[sysadmin at controller-0 ~(keystone_admin)]$ openstack service list
+----------------------------------+----------+-----------------+
| ID | Name | Type |
+----------------------------------+----------+-----------------+
| cb8ffd5420d24a08a94679fbadf20fb7 | fm | faultmanagement |
| 2445032375f5430a9df5225baa30eced | barbican | key-manager |
| 24b4fad8e1f844e39cb1531cef7cf585 | keystone | identity |
| 9641b61d35d9487bbdf618cdb7ba3279 | sysinv | platform |
| 6baf9deb55de48b597860cdb8621cedb | patching | patching |
| 891b5f63ce7f480eab3b69af1650a083 | vim | nfv |
| 9d4665b6429a4b959dbfc37062951216 | smapi | smapi |
+----------------------------------+----------+-----------------+
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mingyuan.qi at intel.com Tue Feb 9 07:20:36 2021
From: mingyuan.qi at intel.com (Qi, Mingyuan)
Date: Tue, 9 Feb 2021 07:20:36 +0000
Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI
In-Reply-To: References: Message-ID:

Hi Owen,

You could create an Armada application with the app-gen-tool[0] and apply it with the 'system application' CLI. There is no panel in the StarlingX dashboard GUI to manage applications so far.

[0] https://opendev.org/starlingx/tools/src/branch/master/app-gen-tool

Mingyuan

From: Owen Yuen
Sent: Tuesday, February 9, 2021 10:25
To: starlingx-discuss at lists.starlingx.io
Cc: Thomas Yungblut ; Aidan Seguin-McPeake
Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI

Is it possible to deploy a workload via STX instead of via kubectl or the Kubernetes GUI directly? We are running a distributed cloud AIO duplex setup, so our worker hosts are on con0 and con1, if that makes a difference. Also, how does StarlingX manage workloads from the GUI? Any help would be greatly appreciated.

Thanks
Owen
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From austin.sun at intel.com Tue Feb 9 07:23:55 2021
From: austin.sun at intel.com (Sun, Austin)
Date: Tue, 9 Feb 2021 07:23:55 +0000
Subject: [Starlingx-discuss] Internal endpoint not found
In-Reply-To: References: Message-ID:

Hi:

You are using the host (platform) services; please refer to [1] to set up a separate environment that targets the OpenStack application endpoints.

Please do NOT use 'source /etc/platform/openrc', which selects the StarlingX platform session, not the OpenStack application session.

[1] https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#local-cli

Thanks.
BR
Austin Sun.
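A sketch of the local-CLI setup that the doc in [1] describes, assuming the containerized Keystone is reachable at its default in-cluster address and <password> is the admin password; treat the exact fields as illustrative and defer to [1]:

sudo su -
mkdir -p /etc/openstack
tee /etc/openstack/clouds.yaml << EOF
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    endpoint_type: internalURL
    auth:
      username: 'admin'
      password: '<password>'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
EOF
exit

export OS_CLOUD=openstack_helm
openstack endpoint list    # should now list the containerized OpenStack services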
From: open infra
Sent: Tuesday, February 9, 2021 2:58 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Internal endpoint not found

Hi,

After installing the StarlingX AIO Simplex, I noticed that the system complains about missing endpoints. Should I run any scripts or activate any service after completing the OpenStack installation with STX? I have referred to https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/index.html

Am I missing anything here?

[sysadmin at controller-0 ~(keystone_admin)]$ openstack flavor list
internal endpoint for compute service in RegionOne region not found
[sysadmin at controller-0 ~(keystone_admin)]$ openstack image list
internal endpoint for image service in RegionOne region not found
[sysadmin at controller-0 ~(keystone_admin)]$ openstack compute service list
internal endpoint for compute service in RegionOne region not found

The following endpoints are available:
http://paste.openstack.org/show/802451/

Services:
[sysadmin at controller-0 ~(keystone_admin)]$ openstack service list
+----------------------------------+----------+-----------------+
| ID | Name | Type |
+----------------------------------+----------+-----------------+
| cb8ffd5420d24a08a94679fbadf20fb7 | fm | faultmanagement |
| 2445032375f5430a9df5225baa30eced | barbican | key-manager |
| 24b4fad8e1f844e39cb1531cef7cf585 | keystone | identity |
| 9641b61d35d9487bbdf618cdb7ba3279 | sysinv | platform |
| 6baf9deb55de48b597860cdb8621cedb | patching | patching |
| 891b5f63ce7f480eab3b69af1650a083 | vim | nfv |
| 9d4665b6429a4b959dbfc37062951216 | smapi | smapi |
+----------------------------------+----------+-----------------+
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sanathukumar14 at gmail.com Tue Feb 9 10:39:48 2021
From: sanathukumar14 at gmail.com (sanath kumar)
Date: Tue, 9 Feb 2021 16:09:48 +0530
Subject: [Starlingx-discuss] Help to proceed with StarlingX
Message-ID:

Hello StarlingX team,

I have successfully installed StarlingX, and I need some help to proceed further.

- We have controller-0, which will act as a master. Is it possible to add a worker node inside controller-0, since we can't deploy applications on the master node (controller-0)?
- I have to create an application which has to be deployed inside StarlingX. Is there any other way to do it?
- Since the sanity test is failing, is there any other way in which I can check that StarlingX is working properly?

Regards,
Sanath Kumar
+91 8971546076
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.little at windriver.com Tue Feb 9 14:52:15 2021
From: scott.little at windriver.com (Scott Little)
Date: Tue, 9 Feb 2021 09:52:15 -0500
Subject: [Starlingx-discuss] Connection timeout for cengn build repo
In-Reply-To: References: Message-ID:

Your download should be pulling from *mirror*.starlingx.cengn.ca.

We need to track down where the *build*.starlingx.cengn.ca reference is coming from.

Scott
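A quick way to hunt for the stray hostname in a build environment is to grep the source trees and any generated yum repo files; the paths below are illustrative and assume the usual $MY_REPO / $MY_WORKSPACE layout shown in the build parameters earlier in this thread:

# Search the source trees used by the build:
grep -rn "build.starlingx.cengn.ca" $MY_REPO $MY_REPO_ROOT/stx-tools 2>/dev/null

# Check repo files generated for, or consumed by, the image build:
grep -rn --include='*.repo' "build.starlingx.cengn.ca" $MY_WORKSPACE 2>/dev/null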
On 2021-02-08 7:00 a.m., Shashi, Yatindra wrote:
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Hi Team,
>
> I try to build the Docker image (cgcs-root]$ ./build-tools/build-docker-images/build-stx-base.sh) for the base of STX, but it fails due to a connection timeout to the repos, as below.
> Can someone tell me why the repo (http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml) is not reachable?
> I tried from the browser and I get an error that this site cannot be reached.
>
> -------------------
>
> Step 4/5 : COPY stx.repo /
> ---> 4c8394607ebc
> Step 5/5 : RUN set -ex ;    sed -i '/\[main\]/ atimeout=120' /etc/yum.conf ;    mv /stx.repo /etc/yum.repos.d/ ;    yum upgrade --disablerepo=* ${REPO_OPTS} -y ;    yum install --disablerepo=* ${REPO_OPTS} -y qemu-img openssh-clients python3 python3-pip python3-wheel rh-python36-mod_wsgi ;    rm -rf /var/log/* /tmp/* /var/tmp/*
> ---> Running in 4493c0916b00
> + sed -i '/\[main\]/ atimeout=120' /etc/yum.conf
> + mv /stx.repo /etc/yum.repos.d/
> + yum upgrade '--disablerepo=*' --enablerepo=ussuri-ceph --enablerepo=ussuri-wsgi -y
> Loaded plugins: fastestmirror, ovl
> Determining fastest mirrors
> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 120000 milliseconds')
> Trying other mirror.
> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 120000 milliseconds')
> Trying other mirror.
> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 120000 milliseconds')
> Trying other mirror.
>
> ---------------
>
> Mit freundlichen Grüßen/with best regards,
>
> Yatindra Shashi
> IoTG DE- Intel Corporation
> Munich, Germany
>
> Intel Deutschland GmbH
> Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany
> Tel: +49 89 99 8853-0, www.intel.de
> Managing Directors: Christin Eisenschmid, Gary Kershaw
> Chairperson of the Supervisory Board: Nicole Lau
> Registered Office: Munich
> Commercial Register: Amtsgericht Muenchen HRB 186928
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image003.png
Type: image/png
Size: 5821 bytes
Desc: not available
URL:

From scott.little at windriver.com Tue Feb 9 15:03:22 2021
From: scott.little at windriver.com (Scott Little)
Date: Tue, 9 Feb 2021 10:03:22 -0500
Subject: [Starlingx-discuss] Connection timeout for cengn build repo
In-Reply-To: References: Message-ID:

I don't see anything in our source code that could be generating the *build*.starlingx.cengn.ca reference.

Are you injecting the *build*.starlingx.cengn.ca reference via the '--repo' argument?

Scott

On 2021-02-09 9:52 a.m., Scott Little wrote:
>
> Your download should be pulling from *mirror*.starlingx.cengn.ca.
>
> We need to track down where the *build*.starlingx.cengn.ca reference
> is coming from.
> > Scott > > > On 2021-02-08 7:00 a.m., Shashi, Yatindra wrote: >> >> **[Please note: This e-mail is from an EXTERNAL e-mail address] >> >> Hi Team, >> >> >> I try to build docker image (cgcs-root]$ >> ./build-tools/build-docker-images/build-stx-base.sh) for the base of >> STX but it fails due to connection timeout to the repos as below. >> Can someone tell me why repo >> (http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml >> ) is not reachable. >> I tried from the browser and I get error that this site cannot be >> reached. >> >> >> ------------------- >> >> Step 4/5 : COPY stx.repo / >> >> ---> 4c8394607ebc >> >> Step 5/5 : RUN set -ex ;    sed -i '/\[main\]/ atimeout=120' >> /etc/yum.conf ;    mv /stx.repo /etc/yum.repos.d/ ;    yum upgrade >> --disablerepo=* ${REPO_OPTS} -y ;    yum install --disablerepo=* >> ${REPO_OPTS} -y         qemu-img openssh-clients         python3 >> python3-pip         python3-wheel rh-python36-mod_wsgi         ;    >> rm -rf /var/log/*         /tmp/*         /var/tmp/* >> >> ---> Running in 4493c0916b00 >> >> + sed -i '/\[main\]/ atimeout=120' /etc/yum.conf >> >> + mv /stx.repo /etc/yum.repos.d/ >> >> + yum upgrade '--disablerepo=*' --enablerepo=ussuri-ceph >> --enablerepo=ussuri-wsgi -y >> >> Loaded plugins: fastestmirror, ovl >> >> Determining fastest mirrors >> >> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: >> [Errno 12] Timeout on >> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: >> (28, 'Connection timed out after 120000 milliseconds') >> >> Trying other mirror. >> >> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: >> [Errno 12] Timeout on >> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: >> (28, 'Connection timed out after 120000 milliseconds') >> >> Trying other mirror. >> >> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: >> [Errno 12] Timeout on >> http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: >> (28, 'Connection timed out after 120000 milliseconds') >> >> Trying other mirror. >> >> --------------- >> >> Mit freundlichen Grüßen/with best regards, >> >> *Yatindra Shashi* >> >> /  IoTG DE- Intel Corporation/ >> >>    Munich, Germany >> >> *A close up of a sign Description automatically generated* >> >> >> >> Intel Deutschland GmbH >> Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany >> Tel: +49 89 99 8853-0, www.intel.de >> Managing Directors: Christin Eisenschmid, Gary Kershaw >> Chairperson of the Supervisory Board: Nicole Lau >> Registered Office: Munich >> Commercial Register: Amtsgericht Muenchen HRB 186928 >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.png
Type: image/png
Size: 5821 bytes
Desc: not available
URL:

From ildiko.vancsa at gmail.com Tue Feb 9 15:05:11 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Tue, 9 Feb 2021 16:05:11 +0100
Subject: [Starlingx-discuss] StarlingX hands-on workshop materials as getting started guide
Message-ID: <944DA0B9-9B69-45E1-8E7A-8728A6666D1A@gmail.com>

Hi StarlingX Community,

I know that we haven't had the chance to run the hands-on workshop for a while now due to the changes in how events are held, but I think it would be good to dig up the materials that we have so we can utilize them.

StarlingX can get very complex, especially if someone is not familiar with all the components that the project integrates on top of the services the community is actively designing and developing. I think it would be a good exercise to look into the materials that we had and maybe turn them into a getting started guide in terms of how to explore the key features of the project.

What do people think?

To get started, does anyone have pointers to the exercises and any documentation we had for the training, or have the materials saved somewhere to share?

Thanks,
Ildikó

From alexandru.dimofte at intel.com Tue Feb 9 16:11:36 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Tue, 9 Feb 2021 16:11:36 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210209T005305Z
Message-ID:

Sanity Test from 2021-February-09 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210209T005305Z/outputs/iso/)

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210209T005305Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================

Sanity Test executed on Bare Metal

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================

Sanity Test executed on Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 8408 bytes
Desc: image001.png
URL:

From Bill.Zvonar at windriver.com Tue Feb 9 17:02:28 2021
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Tue, 9 Feb 2021 17:02:28 +0000
Subject: [Starlingx-discuss] Community (& TSC) Call (Feb 10, 2021)
Message-ID:

Hi all, reminder of the weekly TSC/Community calls coming up tomorrow. Please feel free to add items to the agenda [0] for the community call.

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210210T1500
[3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09

From openinfradn at gmail.com Tue Feb 9 17:42:02 2021
From: openinfradn at gmail.com (open infra)
Date: Tue, 9 Feb 2021 23:12:02 +0530
Subject: [Starlingx-discuss] Internal endpoint not found
In-Reply-To: References: Message-ID:

Thanks Sun.

Is it possible to retrieve the helm endpoint?
I was trying to access the OpenStack GUI via :31000, but it seems the dashboard is not accessible.
I just need to verify whether the helm endpoint_domain is already set.
I also noticed that port 31000 is not open (used nmap against the floating IP).

On Tue, Feb 9, 2021 at 12:54 PM Sun, Austin wrote:

> Hi:
>
> You are using the host (platform) services; please refer to [1] to set up a separate environment that targets the OpenStack application endpoints.
>
> Please do NOT use 'source /etc/platform/openrc', which selects the StarlingX platform session, not the OpenStack application session.
>
> [1] https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#local-cli
>
> Thanks.
> BR
> Austin Sun.
>
> From: open infra
> Sent: Tuesday, February 9, 2021 2:58 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Internal endpoint not found
>
> Hi,
>
> After installing the StarlingX AIO Simplex, I noticed that the system complains about missing endpoints. Should I run any scripts or activate any service after completing the OpenStack installation with STX? I have referred to https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/index.html
>
> Am I missing anything here?
> [sysadmin at controller-0 ~(keystone_admin)]$ openstack flavor list
> internal endpoint for compute service in RegionOne region not found
> [sysadmin at controller-0 ~(keystone_admin)]$ openstack image list
> internal endpoint for image service in RegionOne region not found
> [sysadmin at controller-0 ~(keystone_admin)]$ openstack compute service list
> internal endpoint for compute service in RegionOne region not found
>
> The following endpoints are available:
> http://paste.openstack.org/show/802451/
>
> Services:
> [sysadmin at controller-0 ~(keystone_admin)]$ openstack service list
> +----------------------------------+----------+-----------------+
> | ID | Name | Type |
> +----------------------------------+----------+-----------------+
> | cb8ffd5420d24a08a94679fbadf20fb7 | fm | faultmanagement |
> | 2445032375f5430a9df5225baa30eced | barbican | key-manager |
> | 24b4fad8e1f844e39cb1531cef7cf585 | keystone | identity |
> | 9641b61d35d9487bbdf618cdb7ba3279 | sysinv | platform |
> | 6baf9deb55de48b597860cdb8621cedb | patching | patching |
> | 891b5f63ce7f480eab3b69af1650a083 | vim | nfv |
> | 9d4665b6429a4b959dbfc37062951216 | smapi | smapi |
> +----------------------------------+----------+-----------------+
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Sriram.Dharwadkar at commscope.com Tue Feb 9 18:17:46 2021
From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram)
Date: Tue, 9 Feb 2021 18:17:46 +0000
Subject: [Starlingx-discuss] Upgrade MLNX-OFED to 5.2-2.2.0.0 in StarlingX-4.0 to latest
Message-ID:

Hi,

I have installed distributed StarlingX 4.0. We are facing one issue with the MLNX-OFED version. With the ConnectX-4 EN NIC that we are using in our platform, we see an issue related to the spoof-check parameter. In the Kubernetes environment, after a pod restart, if the pod attaches to the same VF that it was using previously, spoof check becomes ON automatically and traffic stops going out of that VF.

To solve that issue, our hardware vendor has suggested upgrading MLNX-OFED (5.2-2.2.0.0) and the firmware tools (MFT 4.16.1). In the StarlingX environment, I tried doing

# ./install.sh --oem
-E- There are missing packages that are required for installation of MFT.
-I- You can install missing packages using: yum install gcc rpm-build kernel-devel-4.18.0-147.3.1.rt24.96.el8_1.tis.8.x86_64

I could install gcc and kernel-devel-4.18.0-147.3.1.rt24.96.el8_1.tis.8.x86_64, but the rpm-build installation is not going through because of some dependency. How do we go about upgrading these packages?

Regards,
Sriram
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
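A generic triage sketch for the rpm-build dependency failure; nothing here is StarlingX-specific beyond assuming yum is pointed at the platform's configured repos, and any missing provider ultimately has to come from a repo matching the installed base OS:

# List every dependency of rpm-build; a dependency shown without a
# provider is the one blocking the install:
yum deplist rpm-build

# Retry with full error output to see the exact unresolved package:
yum install rpm-build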
From austin.sun at intel.com Wed Feb 10 00:45:43 2021
From: austin.sun at intel.com (Sun, Austin)
Date: Wed, 10 Feb 2021 00:45:43 +0000
Subject: [Starlingx-discuss] Internal endpoint not found
In-Reply-To: References: Message-ID:

Hi,

1) Have you checked whether the openstack application was applied successfully via the "system application-list" command?

2) Are you following [1] to configure the helm endpoint domain?

Please be noticed:

“This command also changes the containerized OpenStack Horizon to listen on horizon.my-starlingx-domain.my-company.com:80 instead of the initial :31000.”

“You must configure { ‘*.my-starlingx-domain.my-company.com’: –> oam‐floating‐ip‐address } in the external DNS server that owns my-company.com.”

[1] https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#configure-helm-endpoint-domain

Thanks.
BR
Austin Sun.

From: open infra
Sent: Wednesday, February 10, 2021 1:42 AM
To: Sun, Austin
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Internal endpoint not found

Thanks Sun.

Is it possible to retrieve the helm endpoint?
I was trying to access the OpenStack GUI via :31000, but it seems the dashboard is not accessible.
I just need to verify whether the helm endpoint_domain is already set.
I also noticed that port 31000 is not open (used nmap against the floating IP).

On Tue, Feb 9, 2021 at 12:54 PM Sun, Austin wrote:

Hi:

You are using the host (platform) services; please refer to [1] to set up a separate environment that targets the OpenStack application endpoints.

Please do NOT use 'source /etc/platform/openrc', which selects the StarlingX platform session, not the OpenStack application session.

[1] https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#local-cli

Thanks.
BR
Austin Sun.

From: open infra
Sent: Tuesday, February 9, 2021 2:58 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Internal endpoint not found

Hi,

After installing the StarlingX AIO Simplex, I noticed that the system complains about missing endpoints. Should I run any scripts or activate any service after completing the OpenStack installation with STX? I have referred to https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/index.html

Am I missing anything here?

[sysadmin at controller-0 ~(keystone_admin)]$ openstack flavor list
internal endpoint for compute service in RegionOne region not found
[sysadmin at controller-0 ~(keystone_admin)]$ openstack image list
internal endpoint for image service in RegionOne region not found
[sysadmin at controller-0 ~(keystone_admin)]$ openstack compute service list
internal endpoint for compute service in RegionOne region not found

The following endpoints are available:
http://paste.openstack.org/show/802451/

Services:
[sysadmin at controller-0 ~(keystone_admin)]$ openstack service list
+----------------------------------+----------+-----------------+
| ID | Name | Type |
+----------------------------------+----------+-----------------+
| cb8ffd5420d24a08a94679fbadf20fb7 | fm | faultmanagement |
| 2445032375f5430a9df5225baa30eced | barbican | key-manager |
| 24b4fad8e1f844e39cb1531cef7cf585 | keystone | identity |
| 9641b61d35d9487bbdf618cdb7ba3279 | sysinv | platform |
| 6baf9deb55de48b597860cdb8621cedb | patching | patching |
| 891b5f63ce7f480eab3b69af1650a083 | vim | nfv |
| 9d4665b6429a4b959dbfc37062951216 | smapi | smapi |
+----------------------------------+----------+-----------------+
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openinfradn at gmail.com Wed Feb 10 04:38:51 2021
From: openinfradn at gmail.com (open infra)
Date: Wed, 10 Feb 2021 10:08:51 +0530
Subject: Re: [Starlingx-discuss] Internal endpoint not found
In-Reply-To: References: Message-ID:

Hi,

On Wed, Feb 10, 2021 at 6:15 AM Sun, Austin wrote:
> Hi
>
> 1) Have you checked whether the openstack application was applied successfully via the "system application-list" command?

Yes, I did. http://paste.openstack.org/show/802492/

> 2) Are you following [1] to configure the helm endpoint domain?

I think this is where I messed up the configuration. Assuming I could use the /etc/hosts file, I set a .local domain (I don't use a local DNS). Is it possible to revert the following setting?
system service-parameter-add openstack helm endpoint_domain=

> Please be noticed:
>
> “This command also changes the containerized OpenStack Horizon to listen on horizon.my-starlingx-domain.my-company.com:80 instead of the initial :31000.”
>
> “You must configure { ‘*.my-starlingx-domain.my-company.com’: –> oam‐floating‐ip‐address } in the external DNS server that owns my-company.com.”
>
> [1] https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#configure-helm-endpoint-domain
>
> Thanks.
> BR
> Austin Sun.
>
> From: open infra
> Sent: Wednesday, February 10, 2021 1:42 AM
> To: Sun, Austin
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Internal endpoint not found
>
> Thanks Sun.
>
> Is it possible to retrieve the helm endpoint?
> I was trying to access the OpenStack GUI via :31000, but it seems the dashboard is not accessible.
> I just need to verify whether the helm endpoint_domain is already set.
> I also noticed that port 31000 is not open (used nmap against the floating IP).
>
> On Tue, Feb 9, 2021 at 12:54 PM Sun, Austin wrote:
>
> Hi:
>
> You are using the host (platform) services; please refer to [1] to set up a separate environment that targets the OpenStack application endpoints.
>
> Please do NOT use 'source /etc/platform/openrc', which selects the StarlingX platform session, not the OpenStack application session.
>
> [1] https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#local-cli
>
> Thanks.
> BR
> Austin Sun.
>
> From: open infra
> Sent: Tuesday, February 9, 2021 2:58 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Internal endpoint not found
>
> Hi,
>
> After installing the StarlingX AIO Simplex, I noticed that the system complains about missing endpoints. Should I run any scripts or activate any service after completing the OpenStack installation with STX? I have referred to https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/index.html
>
> Am I missing anything here?
>
> [sysadmin at controller-0 ~(keystone_admin)]$ openstack flavor list
> internal endpoint for compute service in RegionOne region not found
> [sysadmin at controller-0 ~(keystone_admin)]$ openstack image list
> internal endpoint for image service in RegionOne region not found
> [sysadmin at controller-0 ~(keystone_admin)]$ openstack compute service list
> internal endpoint for compute service in RegionOne region not found
>
> The following endpoints are available:
> http://paste.openstack.org/show/802451/
>
> Services:
> [sysadmin at controller-0 ~(keystone_admin)]$ openstack service list
> +----------------------------------+----------+-----------------+
> | ID | Name | Type |
> +----------------------------------+----------+-----------------+
> | cb8ffd5420d24a08a94679fbadf20fb7 | fm | faultmanagement |
> | 2445032375f5430a9df5225baa30eced | barbican | key-manager |
> | 24b4fad8e1f844e39cb1531cef7cf585 | keystone | identity |
> | 9641b61d35d9487bbdf618cdb7ba3279 | sysinv | platform |
> | 6baf9deb55de48b597860cdb8621cedb | patching | patching |
> | 891b5f63ce7f480eab3b69af1650a083 | vim | nfv |
> | 9d4665b6429a4b959dbfc37062951216 | smapi | smapi |
> +----------------------------------+----------+-----------------+
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
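On reverting the parameter: a sketch using the standard service-parameter CLI; the UUID placeholder is whatever service-parameter-list reports for the helm endpoint_domain row, and re-applying stx-openstack is assumed to regenerate the endpoints without the custom domain:

# Find the parameter and its UUID:
system service-parameter-list | grep endpoint_domain

# Delete it, then re-apply the application so the overrides are rebuilt:
system service-parameter-delete <uuid>
system application-apply stx-openstack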
From OwenYuen at cmail.carleton.ca Tue Feb 9 20:19:02 2021
From: OwenYuen at cmail.carleton.ca (Owen Yuen)
Date: Tue, 9 Feb 2021 20:19:02 +0000
Subject: [Starlingx-discuss] Controller-1 won't come online after "system host-update 2 personality=controller"
Message-ID:

Hi, I'm configuring a distributed cloud AIO duplex setup with 2 subclouds.

On the second subcloud I've successfully bootstrapped con-0, but I'm having issues unlocking con-1.

After I start con-0, I go on con-0 and run the "system host-update 2 personality=controller" command and wait for con-1 to reboot, but after waiting over an hour it still says offline:

[sysadmin at controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | offline |
| 3 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+

Also, I don't know why ID 3 showed up when it wasn't there before.

When I try to console into con-1 it says "waiting for this node to configured".

Any help is appreciated; please let me know how I can find log files to help further troubleshoot.

Thanks

Owen
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yadav.akshay58 at gmail.com Wed Feb 10 08:36:29 2021
From: yadav.akshay58 at gmail.com (yadav.akshay58 at gmail.com)
Date: Wed, 10 Feb 2021 14:06:29 +0530
Subject: [Starlingx-discuss] Controller-1 won't come online after "system host-update 2 personality=controller"
In-Reply-To: References: Message-ID:

Hello Owen,

Your con1 gets bootstrapped successfully, but when it goes to reboot it comes up again on the PXE network of con0 (i.e. the mgmt network), which results in the creation of a 3rd node that again says waiting to be configured.

Solution 1 is to keep watching con1 and, when it goes to reboot, change the boot order to hard drive first. But this only works for that one boot; in future, if your system reboots, you will lose your deployment.

Solution 2 is to set the boot order of con1 in the BIOS to hard drive first and then PXE boot. You can then manually do a one-shot PXE boot of con1, after which it will always boot from the hard drive.

Regards
Akshay
Hughes Systique Corporation HSC

Sent from my iPhone
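On the mystery host with ID 3: once con1 is booting from its own disk, the stray discovered entry can usually be confirmed and removed with the standard CLI. A sketch, assuming the entry really is an unprovisioned stray and not a real node:

system host-show 3      # expect no personality and an unfamiliar mgmt MAC
system host-delete 3    # remove the stray discovered host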
> On 10-Feb-2021, at 1:49 AM, Owen Yuen wrote:
>
> Hi, I'm configuring a distributed cloud AIO duplex setup with 2 subclouds.
>
> On the second subcloud I've successfully bootstrapped con-0, but I'm having issues unlocking con-1.
>
> After I start con-0, I go on con-0 and run the "system host-update 2 personality=controller" command and wait for con-1 to reboot, but after waiting over an hour it still says offline:
>
> [sysadmin at controller-0 ~(keystone_admin)]$ system host-list
> +----+--------------+-------------+----------------+-------------+--------------+
> | id | hostname | personality | administrative | operational | availability |
> +----+--------------+-------------+----------------+-------------+--------------+
> | 1 | controller-0 | controller | unlocked | enabled | available |
> | 2 | controller-1 | controller | locked | disabled | offline |
> | 3 | None | None | locked | disabled | offline |
> +----+--------------+-------------+----------------+-------------+--------------+
>
> Also, I don't know why ID 3 showed up when it wasn't there before.
>
> When I try to console into con-1 it says "waiting for this node to configured".
>
> Any help is appreciated; please let me know how I can find log files to help further troubleshoot.
>
> Thanks
>
> Owen
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From haochuan.z.chen at intel.com Wed Feb 10 12:38:38 2021
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Wed, 10 Feb 2021 12:38:38 +0000
Subject: [Starlingx-discuss] STX_build_docker_flock_images - Build failed
Message-ID:

Hi Scott,

The image build now fails on stx-keystone-api-proxy. What about skipping the failing image, so the other image builds can proceed?

Thanks!

Martin, Chen
IOTG, Software Engineer
021-61164330

-----Original Message-----
From: starlingx-discuss-request at lists.starlingx.io
Sent: Sunday, January 31, 2021 2:22 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Starlingx-discuss Digest, Vol 32, Issue 126

Send Starlingx-discuss mailing list submissions to
	starlingx-discuss at lists.starlingx.io

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
or, via email, send a message with subject or body 'help' to
	starlingx-discuss-request at lists.starlingx.io

You can reach the person managing the list at
	starlingx-discuss-owner at lists.starlingx.io

When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..."

Today's Topics:

1. [stable] [build-report] STX_build_docker_flock_images - Build # 321 - Failure! (build.starlingx at gmail.com)
2. [stable] [build-report] master STX_build_docker_images_layered - Build # 81 - Still Failing! (build.starlingx at gmail.com)
3. [build-report] master STX_build_layer_containers_master_master - Build # 105 - Still Failing! (build.starlingx at gmail.com)
4. [stable] [build-report] master STX_build_docker_images_layered - Build # 82 - Still Failing! (build.starlingx at gmail.com)
5. [build-report] master STX_build_layer_containers_master_master - Build # 106 - Still Failing! (build.starlingx at gmail.com)
6. [stable] [build-report] master STX_build_docker_images_layered - Build # 83 - Still Failing! (build.starlingx at gmail.com)
7. [build-report] master STX_build_layer_containers_master_master - Build # 107 - Still Failing!
(build.starlingx at gmail.com) ---------------------------------------------------------------------- Message: 1 Date: Sat, 30 Jan 2021 19:05:51 -0500 (EST) From: build.starlingx at gmail.com To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images - Build # 321 - Failure! Message-ID: <472945709.51.1612051552281.JavaMail.javamailuser at localhost> Content-Type: text/plain; charset="utf-8" Project: STX_build_docker_flock_images Build #: 321 Status: Failure Timestamp: 20210131T000529Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210130T233107Z/logs -------------------------------------------------------------------------------- Parameters WEB_HOST: mirror.starlingx.cengn.ca MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210130T233107Z OS: centos MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root BASE_VERSION: master-stable-20210130T233107Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210130T233107Z/logs REGISTRY_USERID: slittlewrs LATEST_PREFIX: master PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210130T233107Z/logs PUBLISH_TIMESTAMP: 20210130T233107Z FLOCK_VERSION: master-centos-stable-20210130T233107Z WEB_HOST_PORT: 80 PREFIX: master TIMESTAMP: 20210130T233107Z BUILD_STREAM: stable REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210130T233107Z/outputs REGISTRY: docker.io ------------------------------ Message: 2 Date: Sat, 30 Jan 2021 19:05:53 -0500 (EST) From: build.starlingx at gmail.com To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 81 - Still Failing! 
Message-ID: <1654307004.54.1612051554528.JavaMail.javamailuser at localhost> Content-Type: text/plain; charset="utf-8" Project: STX_build_docker_images_layered Build #: 81 Status: Still Failing Timestamp: 20210130T234712Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210130T233107Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210130T233107Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210130T233107Z/logs MASTER_BUILD_NUMBER: 105 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210130T233107Z/logs MASTER_JOB_NAME: STX_build_layer_containers_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers PUBLISH_TIMESTAMP: 20210130T233107Z DOCKER_BUILD_ID: jenkins-master-containers-20210130T233107Z-builder TIMESTAMP: 20210130T233107Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210130T233107Z/inputs LAYER: containers PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210130T233107Z/outputs ------------------------------ Message: 3 Date: Sat, 30 Jan 2021 19:05:56 -0500 (EST) From: build.starlingx at gmail.com To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 105 - Still Failing! Message-ID: <1081325300.57.1612051556649.JavaMail.javamailuser at localhost> Content-Type: text/plain; charset="utf-8" Project: STX_build_layer_containers_master_master Build #: 105 Status: Still Failing Timestamp: 20210130T233107Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210130T233107Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true ------------------------------ Message: 4 Date: Sun, 31 Jan 2021 01:03:45 -0500 (EST) From: build.starlingx at gmail.com To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 82 - Still Failing! 
Message-ID: <1063712408.60.1612073026255.JavaMail.javamailuser at localhost> Content-Type: text/plain; charset="utf-8" Project: STX_build_docker_images_layered Build #: 82 Status: Still Failing Timestamp: 20210131T052236Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T050619Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210131T050619Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T050619Z/logs MASTER_BUILD_NUMBER: 106 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T050619Z/logs MASTER_JOB_NAME: STX_build_layer_containers_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers PUBLISH_TIMESTAMP: 20210131T050619Z DOCKER_BUILD_ID: jenkins-master-containers-20210131T050619Z-builder TIMESTAMP: 20210131T050619Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T050619Z/inputs LAYER: containers PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T050619Z/outputs ------------------------------ Message: 5 Date: Sun, 31 Jan 2021 01:03:48 -0500 (EST) From: build.starlingx at gmail.com To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 106 - Still Failing! Message-ID: <470021719.63.1612073028646.JavaMail.javamailuser at localhost> Content-Type: text/plain; charset="utf-8" Project: STX_build_layer_containers_master_master Build #: 106 Status: Still Failing Timestamp: 20210131T050619Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T050619Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true ------------------------------ Message: 6 Date: Sun, 31 Jan 2021 01:21:38 -0500 (EST) From: build.starlingx at gmail.com To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 83 - Still Failing! 
Message-ID: <653138737.66.1612074099171.JavaMail.javamailuser at localhost>
Content-Type: text/plain; charset="utf-8"

Project: STX_build_docker_images_layered
Build #: 83
Status: Still Failing
Timestamp: 20210131T062137Z
Branch: master

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T060528Z/logs
--------------------------------------------------------------------------------
Parameters

BRANCH: master
MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210131T060528Z
OS: centos
MUNGED_BRANCH: master
MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T060528Z/logs
MASTER_BUILD_NUMBER: 107
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/logs
MASTER_JOB_NAME: STX_build_layer_containers_master_master
MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers
PUBLISH_TIMESTAMP: 20210131T060528Z
DOCKER_BUILD_ID: jenkins-master-containers-20210131T060528Z-builder
TIMESTAMP: 20210131T060528Z
OS_VERSION: 7.5.1804
BUILD_STREAM: stable
PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/inputs
LAYER: containers
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/outputs

------------------------------

Message: 7
Date: Sun, 31 Jan 2021 01:21:40 -0500 (EST)
From: build.starlingx at gmail.com
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 107 - Still Failing!
Message-ID: <6419772.69.1612074101230.JavaMail.javamailuser at localhost>
Content-Type: text/plain; charset="utf-8"

Project: STX_build_layer_containers_master_master
Build #: 107
Status: Still Failing
Timestamp: 20210131T060528Z
Branch: master

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T060528Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true
FORCE_BUILD: true

------------------------------

Subject: Digest Footer

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

------------------------------

End of Starlingx-discuss Digest, Vol 32, Issue 126
**************************************************

From haochuan.z.chen at intel.com Wed Feb 10 12:40:09 2021
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Wed, 10 Feb 2021 12:40:09 +0000
Subject: [Starlingx-discuss] rook-ceph patch for enhancement to enable cephfs
Message-ID:

Hi Bob,

This is my patch to enable CephFS for rook-ceph. It is an enhancement. Please review, thanks.

https://review.opendev.org/c/starlingx/utilities/+/760503

BR!

Martin, Chen
IOTG, Software Engineer
021-61164330

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.little at windriver.com Wed Feb 10 15:45:33 2021
From: scott.little at windriver.com (Scott Little)
Date: Wed, 10 Feb 2021 10:45:33 -0500
Subject: [Starlingx-discuss] STX_build_docker_flock_images - Build failed
In-Reply-To: References: Message-ID: <9584c983-d836-8e78-a92e-987aa531205a@windriver.com>

build-stx-images.sh supports a '--skip <image>' argument if you wish to build all other images while excluding one that is problematic.

Scott
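A usage sketch of the --skip option; the --base and --wheels values follow the usual build-stx-images.sh invocation and are placeholders here, with --skip being the relevant flag:

build-stx-images.sh --os centos --stream stable \
    --base <registry>/stx-centos:master-stable-latest \
    --wheels <wheels-tarball-url> \
    --skip stx-keystone-api-proxy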
On 2021-02-10 7:38 a.m., Chen, Haochuan Z wrote:
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Hi Scott,
>
> The image build now fails on stx-keystone-api-proxy. What about skipping the failing image, so the other image builds can proceed?
>
> Thanks!
>
> Martin, Chen
> IOTG, Software Engineer
> 021-61164330
>
> -----Original Message-----
> From: starlingx-discuss-request at lists.starlingx.io
> Sent: Sunday, January 31, 2021 2:22 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: Starlingx-discuss Digest, Vol 32, Issue 126
>
> Send Starlingx-discuss mailing list submissions to
> 	starlingx-discuss at lists.starlingx.io
>
> To subscribe or unsubscribe via the World Wide Web, visit
> 	http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> or, via email, send a message with subject or body 'help' to
> 	starlingx-discuss-request at lists.starlingx.io
>
> You can reach the person managing the list at
> 	starlingx-discuss-owner at lists.starlingx.io
>
> When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..."
>
> Today's Topics:
>
> 1. [stable] [build-report] STX_build_docker_flock_images - Build # 321 - Failure! (build.starlingx at gmail.com)
> 2. [stable] [build-report] master STX_build_docker_images_layered - Build # 81 - Still Failing! (build.starlingx at gmail.com)
> 3. [build-report] master STX_build_layer_containers_master_master - Build # 105 - Still Failing! (build.starlingx at gmail.com)
> 4. [stable] [build-report] master STX_build_docker_images_layered - Build # 82 - Still Failing! (build.starlingx at gmail.com)
> 5. [build-report] master STX_build_layer_containers_master_master - Build # 106 - Still Failing! (build.starlingx at gmail.com)
> 6. [stable] [build-report] master STX_build_docker_images_layered - Build # 83 - Still Failing! (build.starlingx at gmail.com)
> 7. [build-report] master STX_build_layer_containers_master_master - Build # 107 - Still Failing! (build.starlingx at gmail.com)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 30 Jan 2021 19:05:51 -0500 (EST)
> From: build.starlingx at gmail.com
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images - Build # 321 - Failure!
> Message-ID: <472945709.51.1612051552281.JavaMail.javamailuser at localhost>
> Content-Type: text/plain; charset="utf-8"
>
> Project: STX_build_docker_flock_images
> Build #: 321
> Status: Failure
> Timestamp: 20210131T000529Z
> Branch:
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210130T233107Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> WEB_HOST: mirror.starlingx.cengn.ca
> MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210130T233107Z
> OS: centos
> MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root
> BASE_VERSION: master-stable-20210130T233107Z
> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210130T233107Z/logs
> REGISTRY_USERID: slittlewrs
> LATEST_PREFIX: master
> PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210130T233107Z/logs
> PUBLISH_TIMESTAMP: 20210130T233107Z
> FLOCK_VERSION: master-centos-stable-20210130T233107Z
> WEB_HOST_PORT: 80
> PREFIX: master
> TIMESTAMP: 20210130T233107Z
> BUILD_STREAM: stable
> REGISTRY_ORG: starlingx
> PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210130T233107Z/outputs
> REGISTRY: docker.io
>
> ------------------------------
>
> Message: 2
> Date: Sat, 30 Jan 2021 19:05:53 -0500 (EST)
> From: build.starlingx at gmail.com
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] [stable] [build-report] master
> STX_build_docker_images_layered - Build # 81 - Still Failing!
> Message-ID: > <1081325300.57.1612051556649.JavaMail.javamailuser at localhost> > Content-Type: text/plain; charset="utf-8" > > Project: STX_build_layer_containers_master_master > Build #: 105 > Status: Still Failing > Timestamp: 20210130T233107Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210130T233107Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: true > > ------------------------------ > > Message: 4 > Date: Sun, 31 Jan 2021 01:03:45 -0500 (EST) > From: build.starlingx at gmail.com > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [stable] [build-report] master > STX_build_docker_images_layered - Build # 82 - Still Failing! > Message-ID: > <1063712408.60.1612073026255.JavaMail.javamailuser at localhost> > Content-Type: text/plain; charset="utf-8" > > Project: STX_build_docker_images_layered Build #: 82 > Status: Still Failing > Timestamp: 20210131T052236Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T050619Z/logs > -------------------------------------------------------------------------------- > Parameters > > BRANCH: master > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210131T050619Z > OS: centos > MUNGED_BRANCH: master > MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T050619Z/logs > MASTER_BUILD_NUMBER: 106 > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T050619Z/logs > MASTER_JOB_NAME: STX_build_layer_containers_master_master > MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers > PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers > PUBLISH_TIMESTAMP: 20210131T050619Z > DOCKER_BUILD_ID: jenkins-master-containers-20210131T050619Z-builder > TIMESTAMP: 20210131T050619Z > OS_VERSION: 7.5.1804 > BUILD_STREAM: stable > PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T050619Z/inputs > LAYER: containers > PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T050619Z/outputs > > ------------------------------ > > Message: 5 > Date: Sun, 31 Jan 2021 01:03:48 -0500 (EST) > From: build.starlingx at gmail.com > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [build-report] master > STX_build_layer_containers_master_master - Build # 106 - Still > Failing! > Message-ID: > <470021719.63.1612073028646.JavaMail.javamailuser at localhost> > Content-Type: text/plain; charset="utf-8" > > Project: STX_build_layer_containers_master_master > Build #: 106 > Status: Still Failing > Timestamp: 20210131T050619Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T050619Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: true > > ------------------------------ > > Message: 6 > Date: Sun, 31 Jan 2021 01:21:38 -0500 (EST) > From: build.starlingx at gmail.com > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [stable] [build-report] master > STX_build_docker_images_layered - Build # 83 - Still Failing! 
> Message-ID: > <653138737.66.1612074099171.JavaMail.javamailuser at localhost> > Content-Type: text/plain; charset="utf-8" > > Project: STX_build_docker_images_layered Build #: 83 > Status: Still Failing > Timestamp: 20210131T062137Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T060528Z/logs > -------------------------------------------------------------------------------- > Parameters > > BRANCH: master > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210131T060528Z > OS: centos > MUNGED_BRANCH: master > MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T060528Z/logs > MASTER_BUILD_NUMBER: 107 > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/logs > MASTER_JOB_NAME: STX_build_layer_containers_master_master > MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers > PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers > PUBLISH_TIMESTAMP: 20210131T060528Z > DOCKER_BUILD_ID: jenkins-master-containers-20210131T060528Z-builder > TIMESTAMP: 20210131T060528Z > OS_VERSION: 7.5.1804 > BUILD_STREAM: stable > PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/inputs > LAYER: containers > PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/outputs > > ------------------------------ > > Message: 7 > Date: Sun, 31 Jan 2021 01:21:40 -0500 (EST) > From: build.starlingx at gmail.com > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [build-report] master > STX_build_layer_containers_master_master - Build # 107 - Still > Failing! 
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
>
> ------------------------------
>
> End of Starlingx-discuss Digest, Vol 32, Issue 126
> **************************************************

From Bill.Zvonar at windriver.com  Wed Feb 10 15:57:53 2021
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 10 Feb 2021 15:57:53 +0000
Subject: [Starlingx-discuss] Community (& TSC) Call (Feb 10, 2021)
In-Reply-To: 
References: 
Message-ID: 

>From today's call:

* Standing Topics
    * Sanity
        * seems to be green of late
    * Gerrit Reviews in Need of Attention
        * none this week
* Topics for this Week
    * none this week
* ARs from Previous Meetings
    * no updates this week
* Open Requests for Help
    * [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface
        * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010779.html
        * Greg suggested raising 2 Launchpads - one for s/w, one for docs
            * https://bugs.launchpad.net/starlingx/+bug/1915285
            * https://bugs.launchpad.net/starlingx/+bug/1915231
    * [HSC] - OpenStack STX application tar upload failure in the case of LAG-type data interfaces
    * [HSC] - Is there any way to preserve storage data in case we plan to migrate an existing StarlingX setup to a new StarlingX release? (For instance, StarlingX 4.0 to StarlingX 5.0.)
    * [HSC] - Is there any future plan to support lossless upgrades in StarlingX, i.e. all VMs and the data inside the VMs are preserved while upgrading? Is there any way to upgrade StarlingX R4.0 to StarlingX R5.0 with zero downtime, seamlessly migrating all the workloads (OpenStack VMs, bare metal, Kubernetes pods) as well?
        * Greg noted that StarlingX doesn't support upgrades, but there are commercial products based on StarlingX that support this
    * [sanath]
        * I have completed the StarlingX installation, but the sanity test failed when I performed it. Is there any other method to perform the test to ensure StarlingX is working?
            * per Greg, consider deploying as AIO-SX if you just have the one server
                * look in /etc/platform.conf to see how you're configured
                * or do a 'system host-show controller-0'
                * if configured as Standard, re-install as AIO-SX
            * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010794.html
        * [sanath] I have only controller-0, which acts as a master. Normally, in order to deploy something we need a worker node. How do we add a worker node to it?
        * How do we deploy in StarlingX?
        * Is there any demo which can show StarlingX has low latency?
        * STX AIO Virtual OAM_IP?
            * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010777.html
            * per Greg, you have to be on the server to see the OAM IP, he'll respond
    * Help with StarlingX using Raspberry Pi
        * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010783.html
        * Sanath withdrew the request
    * HostFs update failed: Not enough free space on cgts-vg
        * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010788.html
        * no issue per se, other than to get more space
    * Upgrade MLNX-OFED to 5.2-2.2.0.0 in StarlingX-4.0 to latest
        * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010801.html
        * seems like a feature request, will redirect to the TSC
* Build Matters (if required)
    * nothing this week

-----Original Message-----
From: Zvonar, Bill 
Sent: Tuesday, February 9, 2021 12:02 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Community (& TSC) Call (Feb 10, 2021)

Hi all, reminder of the weekly TSC/Community calls coming up tomorrow.

Please feel free to add items to the agenda [0] for the community call.

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210210T1500
[3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09

From haochuan.z.chen at intel.com  Thu Feb 11 03:47:51 2021
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Thu, 11 Feb 2021 03:47:51 +0000
Subject: [Starlingx-discuss] STX_build_docker_flock_images - Build failed
In-Reply-To: <9584c983-d836-8e78-a92e-987aa531205a@windriver.com>
References: <9584c983-d836-8e78-a92e-987aa531205a@windriver.com>
Message-ID: 

Thanks! What about building the images once with stx-keystone-api-proxy skipped?

I have already merged this patch, but the starlingx/ceph-manager image could not be built and pushed to Docker Hub, because the stx-keystone-api-proxy build failure blocks that image build.
https://review.opendev.org/c/starlingx/utilities/+/760503

I wish we could finish all the image builds in one pass by skipping the current build failure.

Thanks!

Martin, Chen
IOTG, Software Engineer
021-61164330

-----Original Message-----
From: Scott Little 
Sent: Wednesday, February 10, 2021 11:46 PM
To: Chen, Haochuan Z 
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: STX_build_docker_flock_images - Build failed

build-stx-images.sh supports a ' --skip ' argument if you wish to build all other images while excluding one that is problematic.

Scott

On 2021-02-10 7:38 a.m., Chen, Haochuan Z wrote:
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Hi Scott
>
> Now the image build fails with stx-keystone-api-proxy. What about skipping the failing image, so the other image builds can go on?
>
> Thanks!
>
> Martin, Chen
> IOTG, Software Engineer
> 021-61164330
>
> [...]

From scott.little at windriver.com  Thu Feb 11 04:00:35 2021
From: scott.little at windriver.com (Scott Little)
Date: Wed, 10 Feb 2021 23:00:35 -0500
Subject: [Starlingx-discuss] STX_build_docker_flock_images - Build failed
In-Reply-To: 
References: <9584c983-d836-8e78-a92e-987aa531205a@windriver.com>
Message-ID: 

I don't understand your question. What does that update to ceph have to do with your stx-keystone-api-proxy build issue?

That update merged Jan 18, but I see CENGN successfully built and pushed all images, including stx-keystone-api-proxy, on Feb 09.

Scott

On 2021-02-10 10:47 p.m., Chen, Haochuan Z wrote:
> [...]
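For reference, an invocation of the build script using the ' --skip ' option Scott describes might look like the following sketch. Only --skip itself and the image name are taken from this thread; any other arguments you normally pass to the script for your build environment would stay as-is:

    # build all other images while excluding the problematic one
    build-stx-images.sh --skip stx-keystone-api-proxy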
From maryx.camp at intel.com  Thu Feb 11 04:02:42 2021
From: maryx.camp at intel.com (Camp, MaryX)
Date: Thu, 11 Feb 2021 04:02:42 +0000
Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 10-Feb-21
Message-ID: 

Hello all,
Here are this week's docs team meeting minutes (short form). Details in [2].
Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.

  [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings
  [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation

thanks,
Mary Camp
==========
10-Feb-21
All -- reviews merged since last meeting: 3
All -- bug status -- 17 total - team agrees to defer all low-priority LPs until the upstreaming effort is completed.
    13 LPs are WIP against API documentation, which is generated from source code (low priority). Those reviews are here: https://review.opendev.org/#/q/project:starlingx/config
Status/questions/opens
    How will upstream bugs be assigned? When Launchpads are submitted, Mary will triage them, then bring them to this meeting for disposition and assignment.
    New LP submitted by community member: Documentation needs to be updated for LAG and VLAN type interfaces [https://bugs.launchpad.net/starlingx/+bug/1915285]
        We think the installation guides have a general note/disclaimer about describing a basic configuration.
        For this LP, we could add a pointer to the Node Management aggregated-interface info [https://docs.starlingx.io/node_management/starlingx-kubernetes/node_interfaces/configuring-aggregated-ethernet-interfaces-using-the-cli.html]
        For now, we can add more pointers from the install guides to upstreamed docs for special cases like this. AR Mary follow up.
    New review submitted for STX R5 - Edge Worker for Industrial Deployments: https://review.opendev.org/c/starlingx/docs/+/774595
        New section in upstreamed guide about Deployment.
    Ron says we can delete index.rst files after docs are merged/upstreamed.
    Bruce suggests a glossary/acronym list. We will think about implementing this using the existing abbreviations file.
    Mary nominates Ron Stone as additional Core Reviewer for Docs - all were in favor. Ron was added to the starlingx-docs-core group.
    Ron & Juanita were added to the Docs & Infra team members in the wiki: https://wiki.openstack.org/wiki/StarlingX/Docs_and_Infra
    R5 release notes - DISCUSSION DEFERRED TO NEXT MEETING

From maryx.camp at intel.com  Thu Feb 11 04:12:18 2021
From: maryx.camp at intel.com (Camp, MaryX)
Date: Thu, 11 Feb 2021 04:12:18 +0000
Subject: [Starlingx-discuss] [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface
In-Reply-To: 
References: 
Message-ID: 

Hi Lokendra, Anirudh, and team,
Thanks for submitting the STX Documentation launchpad for your issue: https://bugs.launchpad.net/starlingx/+bug/1915285
We discussed it at our Docs meeting and wanted to point you to the Node Management Guide for more info. Here are some relevant sections that may be helpful:
https://docs.starlingx.io/node_management/starlingx-kubernetes/node_interfaces/interface-provisioning.html
https://docs.starlingx.io/node_management/starlingx-kubernetes/node_interfaces/link-aggregation-settings.html
https://docs.starlingx.io/node_management/starlingx-kubernetes/node_interfaces/configuring-aggregated-ethernet-interfaces-using-the-cli.html

Good luck,
Mary Camp
Kelly Services Technical Writer | maryx.camp at intel.com

From: Lokendra Singh Rathour 
Sent: Monday, February 8, 2021 11:49 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface

Hi Team,
Any update with respect to the query raised?

Best Regards,
Lokendra

From: Lokendra Singh Rathour
Sent: Friday, February 5, 2021 5:27 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface

Hello Team,
We are trying to set up a dedicated-storage deployment using StarlingX 4.0, and we have certain observations/errors from the time of worker node configuration for the LAG-type data interfaces. Although we tweaked the procedure a bit and successfully unlocked the worker nodes, we are facing an error when uploading the STX application packages.

Steps as followed:

STEP 1: We created bonds for the data network on the worker nodes (reference document: https://docs.starlingx.io/configuration/host_interface_network_config.html#:~:text=When%20a%20host%20is%20added,system%20host%2Dif%2Dadd.)

  * system host-if-add worker-1 -m 1500 -a active_standby data1bond ae eth1 eth2
  * system host-if-add worker-1 -m 1500 -a active_standby data2bond ae eth3 eth4

STEP 2: Then, as per the main document for configuring data interfaces for worker nodes (reference document: https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_install_kubernetes.html)

  * SPL=/tmp/tmp-system-port-list
  * SPIL=/tmp/tmp-system-host-if-list

The above files do not contain the information for the created bond interfaces for the data networks. Further, in order to get the values of ${DATA0IFUUID} and ${DATA1IFUUID}, the below-mentioned commands are executed:

DATA0IF=
DATA1IF=
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list

# configure the datanetworks in sysinv, prior to referencing it
# in the ``system host-if-modify`` command.
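# Note, added for clarity based on the observations further down in this
# thread: the UUID lookup in the loop below resolves an interface by
# matching its port name, which only works for ethernet-type interfaces.
# An ae (LAG) bond such as data1bond/data2bond has no port entry, so
# DATA0IFUUID/DATA1IFUUID come back empty; the bond UUIDs can instead be
# read directly from the output of 'system host-if-list <node>'.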
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

for NODE in worker-0 worker-1; do
  echo "Configuring interface for: $NODE"
  set -ex
  system host-port-list ${NODE} --nowrap > ${SPL}
  system host-if-list -a ${NODE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
  DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
  DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
  DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
  DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
  DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
  system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
  system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
  system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
  set +ex
done

But in our case these variables (${DATA0IFUUID} and ${DATA1IFUUID}) were not getting populated, so we used the UUIDs returned by the commands:

system host-if-list worker-0
system host-if-list worker-1

We observed that the UUID of any ethernet interface mentioned in the file (SPIL=/tmp/tmp-system-host-if-list) matches the UUID obtained by running the above command (system host-if-list worker-0). Since we did not get the UUIDs from the file in the case of the LAG bonds, we passed the values of the UUIDs directly.

STEP 3: Using this change, we were able to successfully unlock the worker nodes.

[sysadmin at controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | storage-0    | storage     | unlocked       | enabled     | available    |
| 4  | storage-1    | storage     | unlocked       | enabled     | available    |
| 5  | worker-0     | worker      | unlocked       | enabled     | available    |
| 6  | worker-1     | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+

STEP 4: Then, while running the application upload:

system application-upload stx-openstack--centos-stable-versioned.tgz

the error below was seen:

Error:
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/usr/lib64/python2.7/site-packages/sysinv/helm/helm.py", line 317, in _get_helm_chart_overrides
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     cnamespace))
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 45, in get_overrides
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     'hosts': self._get_per_host_overrides()
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 105, in _get_per_host_overrides
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     'auto_bridge_add': self._get_host_bridges(host)})
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 141, in _get_host_bridges
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     port_name = self._get_interface_port_name(iface)
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 280, in _get_interface_port_name
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     assert iface['iftype'] == constants.INTERFACE_TYPE_ETHERNET
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app AssertionError
2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app

Kindly check and help in advising the way forward with respect to the following:

1. Does StarlingX support bond interfaces for the data networks?
   * If yes, do we have any supported document for the same?

Reference Documents:
For Deployment: https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_install_kubernetes.html
For LAG: https://docs.starlingx.io/configuration/host_interface_network_config.html#:~:text=When%20a%20host%20is%20added,system%20host%2Dif%2Dadd

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From haochuan.z.chen at intel.com  Thu Feb 11 04:13:26 2021
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Thu, 11 Feb 2021 04:13:26 +0000
Subject: [Starlingx-discuss] STX_build_docker_flock_images - Build failed
In-Reply-To: 
References: <9584c983-d836-8e78-a92e-987aa531205a@windriver.com>
Message-ID: 

Thanks! I checked again, and my image is already on Docker Hub.

https://hub.docker.com/r/starlingx/stx-ceph-manager

BR!

Martin, Chen
IOTG, Software Engineer
021-61164330

-----Original Message-----
From: Scott Little 
Sent: Thursday, February 11, 2021 12:01 PM
To: Chen, Haochuan Z 
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: STX_build_docker_flock_images - Build failed

I don't understand your question. What does that update to ceph have to do with your stx-keystone-api-proxy build issue?

That update merged Jan 18, but I see CENGN successfully built and pushed all images, including stx-keystone-api-proxy, on Feb 09.

Scott

On 2021-02-10 10:47 p.m., Chen, Haochuan Z wrote:
> [...]
> https://review.opendev.org/c/starlingx/utilities/+/760503 > > I wish we could finish all image build once by skip current build fail. > > Thanks! > > Martin, Chen > IOTG, Software Engineer > 021-61164330 > > -----Original Message----- > From: Scott Little > Sent: Wednesday, February 10, 2021 11:46 PM > To: Chen, Haochuan Z > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: STX_build_docker_flock_images - Build failed > > build-stx-images.sh supports a ' --skip ' argument if you wish to build all other images while excluding one that is problematic. > > Scott > > > On 2021-02-10 7:38 a.m., Chen, Haochuan Z wrote: >> [Please note: This e-mail is from an EXTERNAL e-mail address] >> >> HI scott >> >> Now image build fail with stx-keystone-api-proxy, what about skip >> build fail image, to go on other image build >> >> Thanks! >> >> Martin, Chen >> IOTG, Software Engineer >> 021-61164330 >> >> -----Original Message----- >> From: starlingx-discuss-request at lists.starlingx.io >> >> Sent: Sunday, January 31, 2021 2:22 PM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Starlingx-discuss Digest, Vol 32, Issue 126 >> >> Send Starlingx-discuss mailing list submissions to >> starlingx-discuss at lists.starlingx.io >> >> To subscribe or unsubscribe via the World Wide Web, visit >> >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> or, via email, send a message with subject or body 'help' to >> starlingx-discuss-request at lists.starlingx.io >> >> You can reach the person managing the list at >> starlingx-discuss-owner at lists.starlingx.io >> >> When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." >> >> >> Today's Topics: >> >> 1. [stable] [build-report] STX_build_docker_flock_images - Build >> # 321 - Failure! (build.starlingx at gmail.com) >> 2. [stable] [build-report] master >> STX_build_docker_images_layered - Build # 81 - Still Failing! >> (build.starlingx at gmail.com) >> 3. [build-report] master >> STX_build_layer_containers_master_master - Build # 105 - Still >> Failing! (build.starlingx at gmail.com) >> 4. [stable] [build-report] master >> STX_build_docker_images_layered - Build # 82 - Still Failing! >> (build.starlingx at gmail.com) >> 5. [build-report] master >> STX_build_layer_containers_master_master - Build # 106 - Still >> Failing! (build.starlingx at gmail.com) >> 6. [stable] [build-report] master >> STX_build_docker_images_layered - Build # 83 - Still Failing! >> (build.starlingx at gmail.com) >> 7. [build-report] master >> STX_build_layer_containers_master_master - Build # 107 - Still >> Failing! (build.starlingx at gmail.com) >> >> >> --------------------------------------------------------------------- >> - >> >> Message: 1 >> Date: Sat, 30 Jan 2021 19:05:51 -0500 (EST) >> From: build.starlingx at gmail.com >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [stable] [build-report] >> STX_build_docker_flock_images - Build # 321 - Failure! 
>> Message-ID: >> >> <472945709.51.1612051552281.JavaMail.javamailuser at localhost> >> Content-Type: text/plain; charset="utf-8" >> >> Project: STX_build_docker_flock_images Build #: 321 >> Status: Failure >> Timestamp: 20210131T000529Z >> Branch: >> >> Check logs at: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/conta >> i >> ners/20210130T233107Z/logs >> --------------------------------------------------------------------- >> - >> ---------- >> Parameters >> >> WEB_HOST: mirror.starlingx.cengn.ca >> MY_WORKSPACE: >> /localdisk/loadbuild/jenkins/master-containers/20210130T233107Z >> OS: centos >> MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root >> BASE_VERSION: master-stable-20210130T233107Z >> PUBLISH_LOGS_URL: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/conta >> i >> ners/20210130T233107Z/logs >> REGISTRY_USERID: slittlewrs >> LATEST_PREFIX: master >> PUBLISH_LOGS_BASE: >> /export/mirror/starlingx/master/centos/containers/20210130T233107Z/lo >> g >> s >> PUBLISH_TIMESTAMP: 20210130T233107Z >> FLOCK_VERSION: master-centos-stable-20210130T233107Z >> WEB_HOST_PORT: 80 >> PREFIX: master >> TIMESTAMP: 20210130T233107Z >> BUILD_STREAM: stable >> REGISTRY_ORG: starlingx >> PUBLISH_OUTPUTS_BASE: >> /export/mirror/starlingx/master/centos/containers/20210130T233107Z/ou >> t >> puts >> REGISTRY: docker.io >> >> ------------------------------ >> >> Message: 2 >> Date: Sat, 30 Jan 2021 19:05:53 -0500 (EST) >> From: build.starlingx at gmail.com >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [stable] [build-report] master >> STX_build_docker_images_layered - Build # 81 - Still Failing! >> Message-ID: >> >> <1654307004.54.1612051554528.JavaMail.javamailuser at localhost> >> Content-Type: text/plain; charset="utf-8" >> >> Project: STX_build_docker_images_layered Build #: 81 >> Status: Still Failing >> Timestamp: 20210130T234712Z >> Branch: master >> >> Check logs at: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/conta >> i >> ners/20210130T233107Z/logs >> --------------------------------------------------------------------- >> - >> ---------- >> Parameters >> >> BRANCH: master >> MY_WORKSPACE: >> /localdisk/loadbuild/jenkins/master-containers/20210130T233107Z >> OS: centos >> MUNGED_BRANCH: master >> MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root >> PUBLISH_LOGS_URL: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/conta >> i >> ners/20210130T233107Z/logs >> MASTER_BUILD_NUMBER: 105 >> PUBLISH_LOGS_BASE: >> /export/mirror/starlingx/master/centos/containers/20210130T233107Z/lo >> g >> s >> MASTER_JOB_NAME: STX_build_layer_containers_master_master >> MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers >> PUBLISH_DISTRO_BASE: >> /export/mirror/starlingx/master/centos/containers >> PUBLISH_TIMESTAMP: 20210130T233107Z >> DOCKER_BUILD_ID: jenkins-master-containers-20210130T233107Z-builder >> TIMESTAMP: 20210130T233107Z >> OS_VERSION: 7.5.1804 >> BUILD_STREAM: stable >> PUBLISH_INPUTS_BASE: >> /export/mirror/starlingx/master/centos/containers/20210130T233107Z/in >> p >> uts >> LAYER: containers >> PUBLISH_OUTPUTS_BASE: >> /export/mirror/starlingx/master/centos/containers/20210130T233107Z/ou >> t >> puts >> >> ------------------------------ >> >> Message: 3 >> Date: Sat, 30 Jan 2021 19:05:56 -0500 (EST) >> From: build.starlingx at gmail.com >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [build-report] master >> 
STX_build_layer_containers_master_master - Build # 105 - Still >> Failing! >> Message-ID: >> >> <1081325300.57.1612051556649.JavaMail.javamailuser at localhost> >> Content-Type: text/plain; charset="utf-8" >> >> Project: STX_build_layer_containers_master_master >> Build #: 105 >> Status: Still Failing >> Timestamp: 20210130T233107Z >> Branch: master >> >> Check logs at: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/conta >> i >> ners/20210130T233107Z/logs >> --------------------------------------------------------------------- >> - >> ---------- >> Parameters >> >> BUILD_CONTAINERS_DEV: false >> BUILD_CONTAINERS_STABLE: true >> FORCE_BUILD: true >> >> ------------------------------ >> >> Message: 4 >> Date: Sun, 31 Jan 2021 01:03:45 -0500 (EST) >> From: build.starlingx at gmail.com >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [stable] [build-report] master >> STX_build_docker_images_layered - Build # 82 - Still Failing! >> Message-ID: >> >> <1063712408.60.1612073026255.JavaMail.javamailuser at localhost> >> Content-Type: text/plain; charset="utf-8" >> >> Project: STX_build_docker_images_layered Build #: 82 >> Status: Still Failing >> Timestamp: 20210131T052236Z >> Branch: master >> >> Check logs at: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/conta >> i >> ners/20210131T050619Z/logs >> --------------------------------------------------------------------- >> - >> ---------- >> Parameters >> >> BRANCH: master >> MY_WORKSPACE: >> /localdisk/loadbuild/jenkins/master-containers/20210131T050619Z >> OS: centos >> MUNGED_BRANCH: master >> MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root >> PUBLISH_LOGS_URL: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/conta >> i >> ners/20210131T050619Z/logs >> MASTER_BUILD_NUMBER: 106 >> PUBLISH_LOGS_BASE: >> /export/mirror/starlingx/master/centos/containers/20210131T050619Z/lo >> g >> s >> MASTER_JOB_NAME: STX_build_layer_containers_master_master >> MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers >> PUBLISH_DISTRO_BASE: >> /export/mirror/starlingx/master/centos/containers >> PUBLISH_TIMESTAMP: 20210131T050619Z >> DOCKER_BUILD_ID: jenkins-master-containers-20210131T050619Z-builder >> TIMESTAMP: 20210131T050619Z >> OS_VERSION: 7.5.1804 >> BUILD_STREAM: stable >> PUBLISH_INPUTS_BASE: >> /export/mirror/starlingx/master/centos/containers/20210131T050619Z/in >> p >> uts >> LAYER: containers >> PUBLISH_OUTPUTS_BASE: >> /export/mirror/starlingx/master/centos/containers/20210131T050619Z/ou >> t >> puts >> >> ------------------------------ >> >> Message: 5 >> Date: Sun, 31 Jan 2021 01:03:48 -0500 (EST) >> From: build.starlingx at gmail.com >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [build-report] master >> STX_build_layer_containers_master_master - Build # 106 - Still >> Failing! 
>> Message-ID: <470021719.63.1612073028646.JavaMail.javamailuser at localhost>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Project: STX_build_layer_containers_master_master
>> Build #: 106
>> Status: Still Failing
>> Timestamp: 20210131T050619Z
>> Branch: master
>>
>> Check logs at:
>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T050619Z/logs
>> --------------------------------------------------------------------------------
>> Parameters
>>
>> BUILD_CONTAINERS_DEV: false
>> BUILD_CONTAINERS_STABLE: true
>> FORCE_BUILD: true
>>
>> ------------------------------
>>
>> Message: 6
>> Date: Sun, 31 Jan 2021 01:21:38 -0500 (EST)
>> From: build.starlingx at gmail.com
>> To: starlingx-discuss at lists.starlingx.io
>> Subject: [Starlingx-discuss] [stable] [build-report] master
>>     STX_build_docker_images_layered - Build # 83 - Still Failing!
>> Message-ID: <653138737.66.1612074099171.JavaMail.javamailuser at localhost>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Project: STX_build_docker_images_layered
>> Build #: 83
>> Status: Still Failing
>> Timestamp: 20210131T062137Z
>> Branch: master
>>
>> Check logs at:
>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T060528Z/logs
>> --------------------------------------------------------------------------------
>> Parameters
>>
>> BRANCH: master
>> MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210131T060528Z
>> OS: centos
>> MUNGED_BRANCH: master
>> MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root
>> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T060528Z/logs
>> MASTER_BUILD_NUMBER: 107
>> PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/logs
>> MASTER_JOB_NAME: STX_build_layer_containers_master_master
>> MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers
>> PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers
>> PUBLISH_TIMESTAMP: 20210131T060528Z
>> DOCKER_BUILD_ID: jenkins-master-containers-20210131T060528Z-builder
>> TIMESTAMP: 20210131T060528Z
>> OS_VERSION: 7.5.1804
>> BUILD_STREAM: stable
>> PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/inputs
>> LAYER: containers
>> PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210131T060528Z/outputs
>>
>> ------------------------------
>>
>> Message: 7
>> Date: Sun, 31 Jan 2021 01:21:40 -0500 (EST)
>> From: build.starlingx at gmail.com
>> To: starlingx-discuss at lists.starlingx.io
>> Subject: [Starlingx-discuss] [build-report] master
>>     STX_build_layer_containers_master_master - Build # 107 - Still Failing!
>> Message-ID: <6419772.69.1612074101230.JavaMail.javamailuser at localhost>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Project: STX_build_layer_containers_master_master
>> Build #: 107
>> Status: Still Failing
>> Timestamp: 20210131T060528Z
>> Branch: master
>>
>> Check logs at:
>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210131T060528Z/logs
>> --------------------------------------------------------------------------------
>> Parameters
>>
>> BUILD_CONTAINERS_DEV: false
>> BUILD_CONTAINERS_STABLE: true
>> FORCE_BUILD: true
>>
>> ------------------------------
>>
>> Subject: Digest Footer
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>>
>>
>> ------------------------------
>>
>> End of Starlingx-discuss Digest, Vol 32, Issue 126
>> **************************************************

From anyrude10 at gmail.com  Thu Feb 11 07:30:49 2021
From: anyrude10 at gmail.com (Anirudh Gupta)
Date: Thu, 11 Feb 2021 13:00:49 +0530
Subject: [Starlingx-discuss] [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface
In-Reply-To:
References:
Message-ID:

Hi Mary,

Thanks for sharing the documents.

We had a discussion in the *Technical Steering Committee & Community Call* yesterday, 10th February 2021, regarding this.
*Greg Waines* suggested raising 2 Launchpad bugs - one for the s/w and the other for the docs.

   - https://bugs.launchpad.net/starlingx/+bug/1915285
   - https://bugs.launchpad.net/starlingx/+bug/1915231

In the case of the docs, we had found a workaround in which we found the UUIDs of the LAG interfaces using the commands below:

   - system host-if-list worker-0
   - system host-if-list worker-1

The UUIDs were directly supplied in the commands below:

   - system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
   - system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
   - system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
   - system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

After this we were able to successfully unlock the worker nodes, and all the nodes were available (as shown below):

[sysadmin at controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | storage-0    | storage     | unlocked       | enabled     | available    |
| 4  | storage-1    | storage     | unlocked       | enabled     | available    |
| 5  | worker-0     | worker      | unlocked       | enabled     | available    |
| 6  | worker-1     | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+

Next, we were facing an issue in uploading the OpenStack STX application:
http://mirror.starlingx.cengn.ca/mirror/starlingx/release/4.0.1/centos/flock/outputs/helm-charts/stx-openstack-1.0-49-centos-stable-versioned.tgz

For this we have raised a separate bug, in which STX is expecting the data network type to be only *constants.INTERFACE_TYPE_ETHERNET*:

   - https://bugs.launchpad.net/starlingx/+bug/1915231

Request you to please provide your feedback on any tentative workaround to resolve the issue of the upload
failure of the stx-openstack tar.

Regards
Anirudh Gupta

On Thu, Feb 11, 2021 at 9:42 AM Camp, MaryX wrote:

> Hi Lokendra, Anirudh, and team,
>
> Thanks for submitting the STX documentation launchpad for your issue:
> https://bugs.launchpad.net/starlingx/+bug/1915285
>
> We discussed it at our docs meeting and wanted to point you to the Node
> Management Guide for more info.
>
> Here are some relevant sections that may be helpful:
>
> https://docs.starlingx.io/node_management/starlingx-kubernetes/node_interfaces/interface-provisioning.html
>
> https://docs.starlingx.io/node_management/starlingx-kubernetes/node_interfaces/link-aggregation-settings.html
>
> https://docs.starlingx.io/node_management/starlingx-kubernetes/node_interfaces/configuring-aggregated-ethernet-interfaces-using-the-cli.html
>
> Good luck,
>
> Mary Camp
>
> Kelly Services Technical Writer | maryx.camp at intel.com
>
> *From:* Lokendra Singh Rathour
> *Sent:* Monday, February 8, 2021 11:49 PM
> *To:* starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface
>
> Hi Team,
>
> Any update with respect to the query raised?
>
> Best Regards,
>
> Lokendra
>
> *From:* Lokendra Singh Rathour
> *Sent:* Friday, February 5, 2021 5:27 PM
> *To:* starlingx-discuss at lists.starlingx.io
> *Subject:* [STARLINGX 4.0] APPLICATION UPLOAD GETTING FAILED - with LAG Interface
>
> Hello Team,
>
> We are trying to set up a Dedicated Storage deployment using StarlingX 4.0, on
> which we have certain observations/errors during worker node configuration
> for LAG-type data interfaces. Though we have tweaked the procedure a bit and
> successfully unlocked the worker nodes, we are facing an error when uploading
> the STX application package. Steps followed:
>
> *STEP 1:*
>
> We have created bonds for the data network on the worker nodes (reference document:
> https://docs.starlingx.io/configuration/host_interface_network_config.html#:~:text=When%20a%20host%20is%20added,system%20host%2Dif%2Dadd):
>
> - system host-if-add worker-1 -m 1500 -a active_standby data1bond ae eth1 eth2
> - system host-if-add worker-1 -m 1500 -a active_standby data2bond ae eth3 eth4
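> A minimal sketch of verifying the bonds before moving on (a hedged example,
> assuming the sysinv "host-if-show" command that the StarlingX node management
> docs use for displaying a single interface; the bond names are the ones
> created above):
>
>   # Hypothetical check: confirm each ae interface exists and reports
>   # iftype 'ae' with the expected aggregation mode and member ports.
>   for BOND in data1bond data2bond; do
>     system host-if-show worker-1 ${BOND}
>   done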
>
> *STEP 2:*
>
> Then further, as per the main document, when configuring data interfaces for
> worker nodes (reference document:
> https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_install_kubernetes.html):
>
> - SPL=/tmp/tmp-system-port-list
> - SPIL=/tmp/tmp-system-host-if-list
>
> The above files do not have the information of the created bond interfaces
> for the data networks.
>
> Further, in order to get the values of ${DATA0IFUUID} and ${DATA1IFUUID},
> the below-mentioned commands are executed:
>
> DATA0IF=
> DATA1IF=
> PHYSNET0='physnet0'
> PHYSNET1='physnet1'
> SPL=/tmp/tmp-system-port-list
> SPIL=/tmp/tmp-system-host-if-list
>
> # configure the datanetworks in sysinv, prior to referencing it
> # in the ``system host-if-modify`` command.
> system datanetwork-add ${PHYSNET0} vlan
> system datanetwork-add ${PHYSNET1} vlan
>
> for NODE in worker-0 worker-1; do
>   echo "Configuring interface for: $NODE"
>   set -ex
>   system host-port-list ${NODE} --nowrap > ${SPL}
>   system host-if-list -a ${NODE} --nowrap > ${SPIL}
>   DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
>   DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
>   DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
>   DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
>   DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
>   DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
>   DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
>   DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
>   system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
>   system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
>   system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
>   system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
>   set +ex
> done
>
> But in our case the values in the variables (${DATA0IFUUID} and
> ${DATA1IFUUID}) were not getting populated; therefore we used the UUIDs
> received from the commands:
>
> system host-if-list worker-0
> system host-if-list worker-1
>
> We observed that the UUID of any ethernet interface mentioned in the file
> (SPIL=/tmp/tmp-system-host-if-list) matches the UUID obtained by running
> the above command (system host-if-list worker-0), so, as we did not get
> the UUIDs from the file in the case of the LAG bonds, we passed the values
> of the UUIDs directly.
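> A minimal sketch of that name-based lookup, in the style of the loop above
> (hypothetical awk field positions: in the table printed by
> "system host-if-list --nowrap", field 2 is the uuid and field 4 is the
> interface name, matching the original script's use of field 2 for the uuid;
> PHYSNET0/PHYSNET1 as defined above):
>
>   for NODE in worker-0 worker-1; do
>     # look the ae bond up by its interface name instead of by port name,
>     # since the port-based lookup above only matches ethernet interfaces
>     DATA0IFUUID=$(system host-if-list -a ${NODE} --nowrap | awk '($4 == "data1bond") {print $2}')
>     DATA1IFUUID=$(system host-if-list -a ${NODE} --nowrap | awk '($4 == "data2bond") {print $2}')
>     system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
>     system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
>     system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
>     system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
>   done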
>
> *STEP 3:*
>
> Using the change, we were able to successfully unlock the worker nodes.
>
> [sysadmin at controller-0 ~(keystone_admin)]$ system host-list
> +----+--------------+-------------+----------------+-------------+--------------+
> | id | hostname     | personality | administrative | operational | availability |
> +----+--------------+-------------+----------------+-------------+--------------+
> | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
> | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
> | 3  | storage-0    | storage     | unlocked       | enabled     | available    |
> | 4  | storage-1    | storage     | unlocked       | enabled     | available    |
> | 5  | worker-0     | worker      | unlocked       | enabled     | available    |
> | 6  | worker-1     | worker      | unlocked       | enabled     | available    |
> +----+--------------+-------------+----------------+-------------+--------------+
>
> *STEP 4:*
>
> Then further, while running the *application-upload*:
>
> system application-upload stx-openstack--centos-stable-versioned.tgz
>
> the error below was seen:
>
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/usr/lib64/python2.7/site-packages/sysinv/helm/helm.py", line 317, in _get_helm_chart_overrides
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     cnamespace))
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 45, in get_overrides
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     'hosts': self._get_per_host_overrides()
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 105, in _get_per_host_overrides
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     'auto_bridge_add': self._get_host_bridges(host)})
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 141, in _get_host_bridges
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     port_name = self._get_interface_port_name(iface)
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app   File "/opt/platform/helm/20.06/stx-openstack/1.0-49-centos-stable-versioned/plugins/k8sapp_openstack/helm/neutron.py", line 280, in _get_interface_port_name
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app     assert iface['iftype'] == constants.INTERFACE_TYPE_ETHERNET
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app AssertionError
> 2021-02-05 07:46:40.318 113059 ERROR sysinv.conductor.kube_app
>
> *Kindly check and help in advising the way forward here, w.r.t.:*
>
> 1. *Does StarlingX support bond interfaces for the data networks?*
> 2. *If yes, do we have any supporting document w.r.t. the same?*
>
> Reference documents:
>
> For deployment:
> https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_install_kubernetes.html
>
> For LAG:
> https://docs.starlingx.io/configuration/host_interface_network_config.html#:~:text=When%20a%20host%20is%20added,system%20host%2Dif%2Dadd
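> For reference, the assertion in the traceback fires while the neutron helm
> plugin generates per-host overrides for the data interfaces, and it only
> accepts iftype 'ethernet'. A quick, hypothetical way to see which iftype
> the data-class interfaces actually report (using host-if-show as above; the
> interfaces were renamed to data0/data1 in STEP 2):
>
>   for NODE in worker-0 worker-1; do
>     system host-if-list ${NODE} --nowrap   # 'class' and 'type' columns
>     system host-if-show ${NODE} data0      # iftype shows 'ae' for the bonds
>   done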
> DISCLAIMER: This electronic message and all of its contents, contains
> information which is privileged, confidential or otherwise protected from
> disclosure. The information contained in this electronic mail transmission
> is intended for use only by the individual or entity to which it is
> addressed. If you are not the intended recipient or may have received this
> electronic mail transmission in error, please notify the sender immediately
> and delete / destroy all copies of this electronic mail transmission
> without disclosing, copying, distributing, forwarding, printing or
> retaining any part of it. Hughes Systique accepts no responsibility for
> loss or damage arising from the use of the information transmitted by this
> email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexandru.dimofte at intel.com  Thu Feb 11 08:42:40 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Thu, 11 Feb 2021 08:42:40 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210210T023311Z
Message-ID:

Sanity Test from 2021-February-10 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210210T023311Z/outputs/iso/)

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210210T023311Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================

Sanity Test executed on Bare Metal

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================

Sanity Test executed on Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

[Logo Description automatically generated]
Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 8408 bytes
Desc: image001.png
URL:

From openinfradn at gmail.com  Thu Feb 11 16:39:51 2021
From: openinfradn at gmail.com (open infra)
Date: Thu, 11 Feb 2021 22:09:51 +0530
Subject: [Starlingx-discuss] Internal endpoint not found
In-Reply-To:
References:
Message-ID:

Hi,

I did a fresh installation.

On Wed, Feb 10, 2021 at 6:15 AM Sun, Austin wrote:

> Hi
>
> 1) Have you checked if the openstack application is applied successfully
> via the "system application-list" command?
Yes, the OpenStack application is updated and applied.

$ system application-list
+--------------------------+-----------------------------+-----------------------------------+-----------------------------------------+----------+-----------+
| application              | version                     | manifest name                     | manifest file                           | status   | progress  |
+--------------------------+-----------------------------+-----------------------------------+-----------------------------------------+----------+-----------+
| cert-manager             | 1.0-6                       | cert-manager-manifest             | certmanager-manifest.yaml               | applied  | completed |
| nginx-ingress-controller | 1.0-0                       | nginx-ingress-controller-manifest | nginx_ingress_controller_manifest.yaml  | applied  | completed |
| oidc-auth-apps           | 1.0-28                      | oidc-auth-manifest                | manifest.yaml                           | uploaded | completed |
| platform-integ-apps      | 1.0-10                      | platform-integration-manifest     | manifest.yaml                           | applied  | completed |
| stx-openstack            | 1.0-49-centos-stable-latest | armada-manifest                   | stx-openstack.yaml                      | uploaded | completed |
+--------------------------+-----------------------------+-----------------------------------+-----------------------------------------+----------+-----------+

> 2) are you following [1] to configure helm endpoint domain?

I did not use a specific endpoint domain for helm.

> Please be noticed:
>
> "This command also changes the containerized OpenStack Horizon to listen
> on horizon.my-starlingx-domain.my-company.com:80 instead of the initial
> :31000."
>
> "You must configure { '*.my-starlingx-domain.my-company.com': -> oam-floating-ip-address }
> in the external DNS server that owns my-company.com."
>
> [1] https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#configure-helm-endpoint-domain

I was able to generate a keystone token using the following command:

curl -i \
  -H "Content-Type: application/json" \
  -d '
{ "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "admin",
          "domain": { "id": "default" },
          "password": "XXXXXX"
        }
      }
    }
  }
}' \
"http://192.168.204.1:5000/v3/auth/tokens" ; echo

But I could not retrieve the nova flavors:

curl -i http://192.168.204.1:80/v2.1/flavors -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" | tail -1 | python -m json.tool

I still cannot access the OpenStack dashboard.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
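One note on the listing above: stx-openstack shows status "uploaded", not
"applied", and the nova/compute endpoints only exist once the application has
actually been applied. A minimal, hedged sketch of the usual next step (the
apply command from the standard install guides):

  # apply the uploaded stx-openstack application, then watch its status
  # until it reaches 'applied'
  system application-apply stx-openstack
  watch -n 30 "system application-list | grep stx-openstack"

If the status never reaches "applied", the sysinv logs should show why.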
From build.starlingx at gmail.com  Fri Feb 12 03:04:27 2021
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 11 Feb 2021 22:04:27 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1539 - Failure!
Message-ID: <1109595528.113.1613099068907.JavaMail.javamailuser@localhost>

Project: STX_build_lst_audit
Build #: 1539
Status: Failure
Timestamp: 20210212T024600Z
Branch:

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210212T023117Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210212T023117Z
DOCKER_BUILD_ID: jenkins-master-distro-20210212T023117Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210212T023117Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210212T023117Z/logs
MASTER_JOB_NAME: STX_build_layer_distro_master_master
LAYER: distro
MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/distro

From alexandru.dimofte at intel.com  Fri Feb 12 08:41:57 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Fri, 12 Feb 2021 08:41:57 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210211T023342Z
Message-ID:

Sanity Test from 2021-February-11 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210211T023342Z/outputs/iso/)

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210211T023342Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================

Sanity Test executed on Bare Metal

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================

Sanity Test executed on Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

[Logo Description automatically generated]
Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 8408 bytes Desc: image001.png URL: From openinfradn at gmail.com Fri Feb 12 16:03:49 2021 From: openinfradn at gmail.com (open infra) Date: Fri, 12 Feb 2021 21:33:49 +0530 Subject: [Starlingx-discuss] Internal endpoint not found In-Reply-To: References: Message-ID: Hi, Finally, all three dashboards are working. I was accessing OpenStack dashboard via https, but it's working for http. I get the following error when I try to retrieve volume list, orchestration service list, flavor list, image list, etc. internal endpoint for compute service in RegionOne region not found Do I need to enable services and create endpoints manually? On Thu, Feb 11, 2021 at 10:09 PM open infra wrote: > Hi, > > I did a fresh installation. > > On Wed, Feb 10, 2021 at 6:15 AM Sun, Austin wrote: > >> Hi >> >> 1) Have you check if openstack application is applied successfully via >> “system application-list" Command ? >> > > Yes, OpenStack application is updated and applied. > > > $ system application-list > > +--------------------------+-----------------------------+-----------------------------------+----------------------------------------+----------+-----------+ > | application | version | manifest name > | manifest file | status | > progress | > > +--------------------------+-----------------------------+-----------------------------------+----------------------------------------+----------+-----------+ > | cert-manager | 1.0-6 | > cert-manager-manifest | certmanager-manifest.yaml > | applied | completed | > | nginx-ingress-controller | 1.0-0 | > nginx-ingress-controller-manifest | nginx_ingress_controller_manifest.yaml > | applied | completed | > | oidc-auth-apps | 1.0-28 | > oidc-auth-manifest | manifest.yaml > | uploaded | completed | > | platform-integ-apps | 1.0-10 | > platform-integration-manifest | manifest.yaml > | applied | completed | > | stx-openstack | 1.0-49-centos-stable-latest | armada-manifest > | stx-openstack.yaml | uploaded | > completed | > > +--------------------------+-----------------------------+-----------------------------------+----------------------------------------+----------+-----------+ > > >> 2) are you following [1] to configure helm endpoint domain? >> > > I did not used speficic endpoint for helm. > > > >> >> Please be noticed: >> >> >> >> “This command also changes the containerized OpenStack Horizon to listen >> on horizon.my-starlingx-domain.my-company.com:80 instead of the initial >> :31000.” >> >> “You must configure { ‘*.my-starlingx-domain.my-company.com’: –> oam‐ >> floating‐ip‐address } in the external DNS server that owns >> my-company.com.” >> >> >> >> [1] >> https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#configure-helm-endpoint-domain >> > > > > I was able to generate a keystone token using the following command. > > curl -i \ > -H "Content-Type: application/json" \ > -d ' > { "auth": { > "identity": { > "methods": ["password"], > "password": { > "user": { > "name": "admin", > "domain": { "id": "default" }, > "password": "XXXXXX" > } > } > } > } > }' \ > "http://192.168.204.1:5000/v3/auth/tokens" ; echo > > But could not retrieve nova flavors > > curl -i http://192.168.204.1:80/v2.1/flavors -X GET -H "Content-Type: > application/json" -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" > | tail -1 | python -m json.tool > > I still can not access the Openstack dashboard. > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexandru.dimofte at intel.com Sat Feb 13 08:09:22 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sat, 13 Feb 2021 08:09:22 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210212T030407Z Message-ID: Sanity Test from 2021-February-12 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210212T030407Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210212T030407Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 8408 bytes Desc: image003.png URL: From alexandru.dimofte at intel.com Sat Feb 13 08:18:13 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sat, 13 Feb 2021 08:18:13 +0000 Subject: [Starlingx-discuss] Cengn mirror server seems to be offline Message-ID: Hello guys, I observed that http://mirror.starlingx.cengn.ca/mirror/starlingx server is offline. Thanks! BR, Alex [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Screenshot 2021-02-13 101303.jpg
Type: image/jpeg
Size: 100197 bytes
Desc: Screenshot 2021-02-13 101303.jpg
URL:

From Ankush.Rai at commscope.com  Sun Feb 14 08:26:45 2021
From: Ankush.Rai at commscope.com (Rai, Ankush)
Date: Sun, 14 Feb 2021 08:26:45 +0000
Subject: [Starlingx-discuss] Alarm "Memory threshold"
Message-ID:

Hi,

The alarm below is getting raised for every node of the central and edge clouds:

"Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%"

It looks to be a false alarm, as the nodes have enough available memory. Please confirm the root cause of this alarm.

Thanks,
Ankush

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ildiko.vancsa at gmail.com  Sun Feb 14 09:12:13 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Sun, 14 Feb 2021 10:12:13 +0100
Subject: [Starlingx-discuss] Alarm "Memory threshold"
In-Reply-To:
References:
Message-ID: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com>

Hi Ankush,

Do you have any log entries on the system you could share here that show the memory readings the alarm might be triggered by?

Thanks,
Ildikó

> On Feb 14, 2021, at 09:26, Rai, Ankush wrote:
>
> Hi,
>
> Below alarm is getting raised for every node of the central and edge cloud.
>
> "Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%"
>
> It looks to be the false alarm as nodes are having enough available memory. Please config the root cause of this alarm.
>
> Thanks,
> Ankush
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Ankush.Rai at commscope.com  Sun Feb 14 09:51:19 2021
From: Ankush.Rai at commscope.com (Rai, Ankush)
Date: Sun, 14 Feb 2021 09:51:19 +0000
Subject: [Starlingx-discuss] Alarm "Memory threshold"
In-Reply-To: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com>
References: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com>
Message-ID:

Not sure exactly which log file to check. Captured some data here; please check if this can help.

Software Version: 20.06

Memory:
Reserved for Platform: 4600 MiB
Usable Total: 13293 MiB
Available: 13293 MiB

The fm has logged these events.
fm-event.log:2021-02-10T08:57:25.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 80.00%, actual 88.83%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "major", "state" : "set", "timestamp" : "2021-02-10 08:57:25.484131" } fm-event.log:2021-02-10T09:17:55.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "critical", "state" : "set", "timestamp" : "2021-02-10 09:17:55.482154" } cat /proc/meminfo MemTotal: 18323408 kB MemFree: 5322412 kB MemAvailable: 12068172 kB Buffers: 860596 kB Cached: 5966616 kB SwapCached: 0 kB Active: 8741484 kB Inactive: 3502072 kB Active(anon): 5156920 kB Inactive(anon): 34364 kB Active(file): 3584564 kB Inactive(file): 3467708 kB Unevictable: 5424 kB Mlocked: 5424 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 152 kB Writeback: 0 kB AnonPages: 5421708 kB Mapped: 940536 kB Shmem: 54940 kB KReclaimable: 267856 kB Slab: 492224 kB SReclaimable: 267856 kB SUnreclaim: 224368 kB KernelStack: 23488 kB PageTables: 67396 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 9161704 kB Committed_AS: 14837680 kB VmallocTotal: 34359738367 kB VmallocUsed: 0 kB VmallocChunk: 0 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB Hugetlb: 0 kB DirectMap4k: 221048 kB DirectMap2M: 18653184 kB …………………………………………………………………………………………………………………………………………………… anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Base memory trampoline at [(____ptrval____)] 99000 size 24576 anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Early memory node ranges anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0x00000000-0x00000fff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0x000a0000-0x000effff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0x000f0000-0x000fffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0x7ffde000-0x7fffffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0x80000000-0xafffffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0xb0000000-0xbfffffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0xc0000000-0xfed1bfff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0xfed20000-0xfeffbfff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0xff000000-0xfffbffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff] anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Memory: 2093848K/18873840K available (12292K kernel code, 1354K rwdata, 3652K rodata, 2120K init, 5364K bss, 
461472K reserved, 0K cma-reserved) anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Freeing SMP alternatives memory: 28K anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Freeing initrd memory: 68228K anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Non-volatile memory driver v1.3 anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Freeing unused decrypted memory: 2040K anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Freeing unused kernel memory: 2120K anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Freeing unused kernel memory: 2020K anaconda/journal.log:Feb 10 06:20:03 localhost kernel: Freeing unused kernel memory: 444K anaconda/journal.log:Feb 10 06:20:20 localhost anaconda[3177]: check_memory(): total:18176, needed:320, graphical:410 anaconda/journal.log:Feb 10 06:20:22 localhost anaconda[3177]: check_memory(): total:18176, needed:320, graphical:410 anaconda/journal.log:Feb 10 06:20:23 localhost blivet[3177]: Detected 17.75 GiB of memory anaconda/journal.log:Feb 10 06:20:23 localhost blivet[3177]: Detected 17.75 GiB of memory anaconda/syslog:06:20:03,624 DEBUG kernel:Base memory trampoline at [(____ptrval____)] 99000 size 24576 anaconda/syslog:06:20:03,624 INFO kernel:Early memory node ranges anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0x00000000-0x00000fff] anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff] anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0x000a0000-0x000effff] anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0x000f0000-0x000fffff] anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0x7ffde000-0x7fffffff] anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0x80000000-0xafffffff] anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0xb0000000-0xbfffffff] anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0xc0000000-0xfed1bfff] anaconda/syslog:06:20:03,624 INFO kernel:PM: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff] anaconda/syslog:06:20:03,625 INFO kernel:PM: Registered nosave memory: [mem 0xfed20000-0xfeffbfff] anaconda/syslog:06:20:03,625 INFO kernel:PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] anaconda/syslog:06:20:03,625 INFO kernel:PM: Registered nosave memory: [mem 0xff000000-0xfffbffff] anaconda/syslog:06:20:03,625 INFO kernel:PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff] anaconda/syslog:06:20:03,625 INFO kernel:Memory: 2093848K/18873840K available (12292K kernel code, 1354K rwdata, 3652K rodata, 2120K init, 5364K bss, 461472K reserved, 0K cma-reserved) anaconda/syslog:06:20:03,625 INFO kernel:Freeing SMP alternatives memory: 28K anaconda/syslog:06:20:03,633 INFO kernel:Freeing initrd memory: 68228K anaconda/syslog:06:20:03,634 INFO kernel:Non-volatile memory driver v1.3 anaconda/syslog:06:20:03,636 INFO kernel:Freeing unused decrypted memory: 2040K anaconda/syslog:06:20:03,636 INFO kernel:Freeing unused kernel memory: 2120K anaconda/syslog:06:20:03,636 INFO kernel:Freeing unused kernel memory: 2020K anaconda/syslog:06:20:03,636 INFO kernel:Freeing unused kernel memory: 444K anaconda/storage.log:06:20:23,162 INFO blivet: Detected 17.75 GiB of memory anaconda/storage.log:06:20:23,176 INFO blivet: Detected 17.75 GiB of memory anaconda/anaconda.log:06:20:20,178 INFO anaconda: check_memory(): total:18176, needed:320, graphical:410 anaconda/anaconda.log:06:20:22,583 INFO 
anaconda: check_memory(): total:18176, needed:320, graphical:410 barbican/barbican-api.log:2021-02-10 08:54:10.730 110633 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.: error: [Errno 11] Resource temporarily unavailable barbican/barbican-api.log:2021-02-10 08:54:20.635 110632 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.: error: [Errno 11] Resource temporarily unavailable bash.log:2021-02-14T09:28:05.000 controller-0 bash: info HISTORY: PID=2553500 UID=0 grep -ir memory * ceph/ceph-mon.controller.log:2021-02-14 07:37:35.761 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613288255762191, "job": 3263, "event": "flush_started", "num_memtables": 1, "num_entries": 1045, "num_deletes": 252, "memory_usage": 963312, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 07:41:23.195 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613288483196413, "job": 3265, "event": "flush_started", "num_memtables": 1, "num_entries": 1254, "num_deletes": 251, "memory_usage": 1299128, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 07:45:55.824 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613288755825719, "job": 3267, "event": "flush_started", "num_memtables": 1, "num_entries": 1439, "num_deletes": 250, "memory_usage": 1515264, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 07:48:17.057 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613288897058556, "job": 3269, "event": "flush_started", "num_memtables": 1, "num_entries": 857, "num_deletes": 251, "memory_usage": 712232, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 07:49:50.868 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613288990869385, "job": 3271, "event": "flush_started", "num_memtables": 1, "num_entries": 891, "num_deletes": 500, "memory_usage": 472440, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 07:54:15.909 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613289255910375, "job": 3273, "event": "flush_started", "num_memtables": 1, "num_entries": 1409, "num_deletes": 250, "memory_usage": 1425664, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 07:55:07.210 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613289307211423, "job": 3275, "event": "flush_started", "num_memtables": 1, "num_entries": 494, "num_deletes": 251, "memory_usage": 364992, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:02:08.404 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613289728404884, "job": 3277, "event": "flush_started", "num_memtables": 1, "num_entries": 2046, "num_deletes": 251, 
"memory_usage": 2135512, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:02:35.961 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613289755963467, "job": 3279, "event": "flush_started", "num_memtables": 1, "num_entries": 377, "num_deletes": 250, "memory_usage": 172976, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:08:57.522 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613290137523661, "job": 3281, "event": "flush_started", "num_memtables": 1, "num_entries": 1921, "num_deletes": 251, "memory_usage": 2127024, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:10:56.011 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613290256012532, "job": 3283, "event": "flush_started", "num_memtables": 1, "num_entries": 732, "num_deletes": 250, "memory_usage": 478624, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:15:59.689 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613290559690969, "job": 3285, "event": "flush_started", "num_memtables": 1, "num_entries": 1560, "num_deletes": 251, "memory_usage": 1594384, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:19:16.064 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613290756065120, "job": 3287, "event": "flush_started", "num_memtables": 1, "num_entries": 1117, "num_deletes": 250, "memory_usage": 1146344, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:22:53.859 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613290973860378, "job": 3289, "event": "flush_started", "num_memtables": 1, "num_entries": 1183, "num_deletes": 251, "memory_usage": 1121856, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:27:36.114 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613291256115503, "job": 3291, "event": "flush_started", "num_memtables": 1, "num_entries": 1498, "num_deletes": 250, "memory_usage": 1648656, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:29:46.002 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613291386003231, "job": 3293, "event": "flush_started", "num_memtables": 1, "num_entries": 799, "num_deletes": 251, "memory_usage": 614504, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:30:21.151 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613291421152136, "job": 3295, "event": "flush_started", "num_memtables": 1, "num_entries": 688, "num_deletes": 500, "memory_usage": 405880, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:35:56.182 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613291756183200, "job": 3297, "event": "flush_started", "num_memtables": 1, "num_entries": 1671, "num_deletes": 250, "memory_usage": 1627800, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:36:40.157 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613291800158300, "job": 3299, "event": "flush_started", "num_memtables": 1, "num_entries": 434, "num_deletes": 251, "memory_usage": 193880, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:43:29.960 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613292209960885, "job": 3301, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "memory_usage": 2360440, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:44:21.298 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 
1613292261298779, "job": 3303, "event": "flush_started", "num_memtables": 1, "num_entries": 487, "num_deletes": 252, "memory_usage": 322096, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:50:25.489 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613292625490006, "job": 3305, "event": "flush_started", "num_memtables": 1, "num_entries": 1810, "num_deletes": 251, "memory_usage": 1868632, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:52:41.357 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613292761358582, "job": 3307, "event": "flush_started", "num_memtables": 1, "num_entries": 866, "num_deletes": 250, "memory_usage": 827152, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 08:57:14.652 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613293034653170, "job": 3309, "event": "flush_started", "num_memtables": 1, "num_entries": 1430, "num_deletes": 251, "memory_usage": 1396600, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:01:01.415 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613293261416350, "job": 3311, "event": "flush_started", "num_memtables": 1, "num_entries": 1270, "num_deletes": 250, "memory_usage": 1417016, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:04:04.788 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613293444789140, "job": 3313, "event": "flush_started", "num_memtables": 1, "num_entries": 1028, "num_deletes": 251, "memory_usage": 884464, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:09:21.470 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613293761471268, "job": 3315, "event": "flush_started", "num_memtables": 1, "num_entries": 1601, "num_deletes": 250, "memory_usage": 1605488, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:10:56.953 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613293856954297, "job": 3317, "event": "flush_started", "num_memtables": 1, "num_entries": 696, "num_deletes": 251, "memory_usage": 659960, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:11:26.506 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613293886507226, "job": 3319, "event": "flush_started", "num_memtables": 1, "num_entries": 611, "num_deletes": 500, "memory_usage": 111816, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:17:41.549 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613294261550187, "job": 3321, "event": "flush_started", "num_memtables": 1, "num_entries": 1891, "num_deletes": 250, "memory_usage": 2080776, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:17:47.109 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613294267110023, "job": 3323, "event": "flush_started", "num_memtables": 1, "num_entries": 293, "num_deletes": 251, "memory_usage": 109832, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:24:47.264 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613294687265556, "job": 3325, "event": "flush_started", "num_memtables": 1, "num_entries": 2046, "num_deletes": 251, "memory_usage": 2130984, "flush_reason": "Other Reasons"} ceph/ceph-mon.controller.log:2021-02-14 09:26:01.599 7f6f7faff700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1613294761601625, "job": 3327, "event": "flush_started", "num_memtables": 1, "num_entries": 594, "num_deletes": 250, "memory_usage": 515040, "flush_reason": "Other Reasons"} 
daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.3 MiB (Base: 4684.1, k8s-system: 590.2), k8s-addon: 0.0
daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5042.1 MiB, cgroup-rss: 5329.7 MiB, Avail: 11818.8 MiB, Total: 16860.9 MiB
daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5042.1 MiB, Avail: 12159.3 MiB, Total: 17201.4 MiB

[... the same four entries repeat every 30 seconds from 07:14:55 through 07:57:55, with Usage steady between 114.6% and 114.8% (Platform ~5271-5281 MiB against the 4600.0 MiB reserved); the only additional entries in the window are the four degrade asserts reproduced below ...]

daemon.log:2021-02-14T07:16:25.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T07:26:55.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T07:37:25.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T07:47:55.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T07:57:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5273.1 MiB (Base: 4682.7, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T07:57:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.6 MiB, cgroup-rss: 5328.5 MiB, Avail: 11821.6 MiB, Total: 16861.2 MiB daemon.log:2021-02-14T07:57:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5039.6 MiB, Avail: 12162.2 MiB, Total: 17201.8 MiB daemon.log:2021-02-14T07:58:25.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"} daemon.log:2021-02-14T07:58:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T07:58:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5272.9 MiB (Base: 4682.9, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T07:58:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.6 MiB, cgroup-rss: 5328.3 MiB, Avail: 11821.9 MiB, Total: 16861.5 MiB daemon.log:2021-02-14T07:58:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5039.6 MiB, Avail: 12162.5 MiB, Total: 17202.1 MiB daemon.log:2021-02-14T07:58:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T07:58:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5272.3 MiB (Base: 4681.9, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T07:58:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.2 MiB, cgroup-rss: 5327.7 MiB, Avail: 11822.5 MiB, Total: 16861.7 MiB daemon.log:2021-02-14T07:58:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.29%, Anon: 5039.0 MiB, Avail: 12163.5 MiB, Total: 17202.6 MiB daemon.log:2021-02-14T07:59:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T07:59:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5277.0 MiB (Base: 4686.8, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T07:59:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5043.2 MiB, cgroup-rss: 5333.4 MiB, Avail: 11818.5 MiB, Total: 16861.7 MiB daemon.log:2021-02-14T07:59:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.32%, Anon: 5043.2 MiB, Avail: 12159.2 MiB, Total: 17202.4 MiB daemon.log:2021-02-14T07:59:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5271.6 MiB (Base: 4681.3, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T07:59:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5038.8 MiB, cgroup-rss: 5327.0 MiB, Avail: 11823.1 MiB, Total: 16861.9 MiB daemon.log:2021-02-14T07:59:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T07:59:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.29%, Anon: 5038.8 MiB, Avail: 12163.8 MiB, Total: 17202.6 MiB daemon.log:2021-02-14T08:00:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:00:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5272.3 MiB (Base: 4682.5, k8s-system: 589.9), k8s-addon: 0.0 daemon.log:2021-02-14T08:00:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.5 MiB, cgroup-rss: 5327.7 MiB, Avail: 11821.6 MiB, Total: 16861.1 MiB daemon.log:2021-02-14T08:00:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5039.5 MiB, Avail: 12162.3 MiB, Total: 17201.8 MiB daemon.log:2021-02-14T08:00:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:00:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5272.0 MiB (Base: 4681.7, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T08:00:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5038.8 MiB, cgroup-rss: 5327.5 MiB, Avail: 11822.3 MiB, Total: 16861.1 MiB daemon.log:2021-02-14T08:00:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.29%, Anon: 5038.8 MiB, Avail: 12163.0 MiB, Total: 17201.8 MiB daemon.log:2021-02-14T08:01:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:01:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5273.4 MiB (Base: 4683.2, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:01:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.8 MiB, cgroup-rss: 5328.8 MiB, Avail: 11821.2 MiB, Total: 16861.0 MiB daemon.log:2021-02-14T08:01:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5039.8 MiB, Avail: 12161.9 MiB, Total: 17201.8 MiB daemon.log:2021-02-14T08:01:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:01:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.9 MiB (Base: 4684.9, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:01:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5041.2 MiB, cgroup-rss: 5330.4 MiB, Avail: 11819.4 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T08:01:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5041.2 MiB, Avail: 12160.1 MiB, Total: 17201.4 MiB daemon.log:2021-02-14T08:02:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:02:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5273.2 MiB (Base: 4682.8, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T08:02:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.0 MiB, cgroup-rss: 5328.6 MiB, Avail: 11821.4 MiB, Total: 16860.4 MiB daemon.log:2021-02-14T08:02:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.29%, Anon: 5039.0 MiB, Avail: 12162.2 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T08:02:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5273.0 MiB (Base: 4682.7, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T08:02:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.1 MiB, cgroup-rss: 5328.4 MiB, Avail: 11821.3 MiB, Total: 16860.3 MiB daemon.log:2021-02-14T08:02:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5039.1 MiB, Avail: 12162.0 MiB, Total: 17201.0 MiB daemon.log:2021-02-14T08:02:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:03:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:03:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.3 MiB (Base: 4684.2, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:03:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.3 MiB, cgroup-rss: 5329.7 MiB, Avail: 11820.2 MiB, Total: 16860.5 MiB daemon.log:2021-02-14T08:03:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5040.3 MiB, Avail: 12160.9 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T08:03:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:03:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5276.3 MiB (Base: 4686.1, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T08:03:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5042.6 MiB, cgroup-rss: 5331.8 MiB, Avail: 11818.0 MiB, Total: 16860.7 MiB daemon.log:2021-02-14T08:03:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.32%, Anon: 5042.6 MiB, Avail: 12158.7 MiB, Total: 17201.4 MiB daemon.log:2021-02-14T08:04:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:04:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.2 MiB (Base: 4683.9, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T08:04:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.4 MiB, cgroup-rss: 5329.6 MiB, Avail: 11820.5 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T08:04:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5040.2 MiB, Avail: 12160.4 MiB, Total: 17200.6 MiB daemon.log:2021-02-14T08:04:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5272.9 MiB (Base: 4682.7, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T08:04:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:04:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.3 MiB, cgroup-rss: 5328.4 MiB, Avail: 11822.2 MiB, Total: 16861.5 MiB daemon.log:2021-02-14T08:04:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5039.7 MiB, Avail: 12162.5 MiB, Total: 17202.2 MiB daemon.log:2021-02-14T08:05:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:05:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.5 MiB (Base: 4684.1, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T08:05:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.9 MiB, cgroup-rss: 5330.0 MiB, Avail: 11820.6 MiB, Total: 16861.5 MiB daemon.log:2021-02-14T08:05:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5040.7 MiB, Avail: 12161.4 MiB, Total: 17202.1 MiB daemon.log:2021-02-14T08:05:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:05:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5276.5 MiB (Base: 4686.0, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T08:05:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5043.1 MiB, cgroup-rss: 5331.9 MiB, Avail: 11817.6 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T08:05:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.32%, Anon: 5043.1 MiB, Avail: 12158.3 MiB, Total: 17201.4 MiB daemon.log:2021-02-14T08:06:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:06:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5273.5 MiB (Base: 4683.5, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:06:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.8 MiB, cgroup-rss: 5328.9 MiB, Avail: 11820.0 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T08:06:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5040.8 MiB, Avail: 12160.8 MiB, Total: 17201.6 MiB daemon.log:2021-02-14T08:06:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:06:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5272.6 MiB (Base: 4682.4, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:06:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5039.5 MiB, cgroup-rss: 5328.1 MiB, Avail: 11821.3 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T08:06:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5039.5 MiB, Avail: 12162.0 MiB, Total: 17201.5 MiB daemon.log:2021-02-14T08:07:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:07:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.3 MiB (Base: 4684.3, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:07:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5041.3 MiB, cgroup-rss: 5329.8 MiB, Avail: 11819.3 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T08:07:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5041.3 MiB, Avail: 12160.0 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T08:07:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:07:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5275.6 MiB (Base: 4685.7, k8s-system: 589.8), k8s-addon: 0.0 daemon.log:2021-02-14T08:07:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5042.5 MiB, cgroup-rss: 5331.0 MiB, Avail: 11817.9 MiB, Total: 16860.4 MiB daemon.log:2021-02-14T08:07:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5042.5 MiB, Avail: 12158.7 MiB, Total: 17201.1 MiB daemon.log:2021-02-14T08:08:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:08:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5277.7 MiB (Base: 4687.3, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T08:08:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5044.2 MiB, cgroup-rss: 5333.1 MiB, Avail: 11816.5 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T08:08:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.32%, Anon: 5044.2 MiB, Avail: 12157.3 MiB, Total: 17201.5 MiB daemon.log:2021-02-14T08:08:55.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"} daemon.log:2021-02-14T08:08:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:08:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5272.0 MiB (Base: 4682.1, k8s-system: 589.9), k8s-addon: 0.0 daemon.log:2021-02-14T08:08:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5038.6 MiB, cgroup-rss: 5327.5 MiB, Avail: 11822.3 MiB, Total: 16860.9 MiB daemon.log:2021-02-14T08:08:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.29%, Anon: 5038.6 MiB, Avail: 12163.0 MiB, Total: 17201.6 MiB daemon.log:2021-02-14T08:09:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:09:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.9 MiB (Base: 4684.7, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:09:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.4 MiB, cgroup-rss: 5330.7 MiB, Avail: 11820.1 MiB, Total: 16860.5 MiB daemon.log:2021-02-14T08:09:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5041.4 MiB, Avail: 12159.1 MiB, Total: 17200.5 MiB daemon.log:2021-02-14T08:09:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:09:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.5 MiB (Base: 4684.1, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T08:09:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.7 MiB, cgroup-rss: 5329.9 MiB, Avail: 11819.4 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T08:09:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5040.7 MiB, Avail: 12160.2 MiB, Total: 17200.8 MiB daemon.log:2021-02-14T08:10:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.3 MiB (Base: 4684.2, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:10:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5041.0 MiB, cgroup-rss: 5329.7 MiB, Avail: 11819.6 MiB, Total: 16860.5 MiB daemon.log:2021-02-14T08:10:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:10:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5041.0 MiB, Avail: 12160.3 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T08:10:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:10:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5273.5 MiB (Base: 4683.5, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:10:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.0 MiB, cgroup-rss: 5328.8 MiB, Avail: 11820.3 MiB, Total: 16860.3 MiB daemon.log:2021-02-14T08:10:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5040.2 MiB, Avail: 12161.0 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T08:11:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:11:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.5 MiB (Base: 4684.3, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:11:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5041.2 MiB, cgroup-rss: 5329.7 MiB, Avail: 11819.4 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T08:11:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5041.1 MiB, Avail: 12160.3 MiB, Total: 17201.4 MiB daemon.log:2021-02-14T08:11:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:11:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.6%; Reserved: 4600.0 MiB, Platform: 5273.8 MiB (Base: 4683.8, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:11:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.4 MiB, cgroup-rss: 5329.0 MiB, Avail: 11820.5 MiB, Total: 16860.9 MiB daemon.log:2021-02-14T08:11:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5040.4 MiB, Avail: 12161.3 MiB, Total: 17201.7 MiB daemon.log:2021-02-14T08:12:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:12:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5277.7 MiB (Base: 4687.3, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T08:12:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5044.2 MiB, cgroup-rss: 5332.8 MiB, Avail: 11816.6 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T08:12:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.32%, Anon: 5044.2 MiB, Avail: 12157.4 MiB, Total: 17201.6 MiB daemon.log:2021-02-14T08:12:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:12:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5276.9 MiB (Base: 4686.5, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T08:12:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5043.1 MiB, cgroup-rss: 5332.3 MiB, Avail: 11817.3 MiB, Total: 16860.4 MiB daemon.log:2021-02-14T08:12:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.32%, Anon: 5043.1 MiB, Avail: 12158.2 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T08:13:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:13:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5275.3 MiB (Base: 4685.0, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T08:13:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5042.1 MiB, cgroup-rss: 5330.7 MiB, Avail: 11818.9 MiB, Total: 16861.0 MiB daemon.log:2021-02-14T08:13:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5042.1 MiB, Avail: 12159.7 MiB, Total: 17201.8 MiB daemon.log:2021-02-14T08:13:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.0 MiB (Base: 4684.2, k8s-system: 589.8), k8s-addon: 0.0 daemon.log:2021-02-14T08:13:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:13:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5040.6 MiB, cgroup-rss: 5329.4 MiB, Avail: 11820.3 MiB, Total: 16860.9 MiB daemon.log:2021-02-14T08:13:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.30%, Anon: 5040.6 MiB, Avail: 12161.2 MiB, Total: 17201.8 MiB daemon.log:2021-02-14T08:14:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5279.4 MiB (Base: 4689.1, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T08:14:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5045.6 MiB, cgroup-rss: 5336.5 MiB, Avail: 11814.4 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T08:14:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.34%, Anon: 5046.4 MiB, Avail: 12154.1 MiB, Total: 17200.5 MiB daemon.log:2021-02-14T08:14:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:14:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5277.8 MiB (Base: 4687.6, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:14:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5044.7 MiB, cgroup-rss: 5333.3 MiB, Avail: 11815.8 MiB, Total: 16860.5 MiB daemon.log:2021-02-14T08:14:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.33%, Anon: 5044.7 MiB, Avail: 12156.6 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T08:14:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:15:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:15:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5275.3 MiB (Base: 4685.4, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:15:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5042.5 MiB, cgroup-rss: 5330.8 MiB, Avail: 11818.5 MiB, Total: 16861.0 MiB daemon.log:2021-02-14T08:15:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5042.3 MiB, Avail: 12159.6 MiB, Total: 17201.9 MiB daemon.log:2021-02-14T08:15:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:15:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5276.3 MiB (Base: 4685.8, k8s-system: 590.5), k8s-addon: 0.0 daemon.log:2021-02-14T08:15:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5043.1 MiB, cgroup-rss: 5331.8 MiB, Avail: 11817.8 MiB, Total: 16860.9 MiB daemon.log:2021-02-14T08:15:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.32%, Anon: 5043.1 MiB, Avail: 12158.6 MiB, Total: 17201.7 MiB daemon.log:2021-02-14T08:16:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5278.4 MiB (Base: 4688.2, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:16:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5045.2 MiB, cgroup-rss: 5333.9 MiB, Avail: 11815.8 MiB, Total: 16861.0 MiB daemon.log:2021-02-14T08:16:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.33%, Anon: 5045.2 MiB, Avail: 12156.6 MiB, Total: 17201.8 MiB daemon.log:2021-02-14T08:16:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:16:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.3 MiB (Base: 4684.4, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:16:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5041.7 MiB, cgroup-rss: 5329.8 MiB, Avail: 11819.0 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T08:16:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:16:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5041.7 MiB, Avail: 12159.9 MiB, Total: 17201.6 MiB daemon.log:2021-02-14T08:17:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:17:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5275.5 MiB (Base: 4685.6, k8s-system: 589.9), k8s-addon: 0.0 daemon.log:2021-02-14T08:17:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5042.8 MiB, cgroup-rss: 5330.9 MiB, Avail: 11818.0 MiB, Total: 16860.9 MiB daemon.log:2021-02-14T08:17:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5042.3 MiB, Avail: 12159.1 MiB, Total: 17201.5 MiB daemon.log:2021-02-14T08:17:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:17:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5275.7 MiB (Base: 4685.4, k8s-system: 590.3), k8s-addon: 0.0 daemon.log:2021-02-14T08:17:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5042.3 MiB, cgroup-rss: 5331.1 MiB, Avail: 11817.6 MiB, Total: 16859.9 MiB daemon.log:2021-02-14T08:17:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.31%, Anon: 5042.3 MiB, Avail: 12158.5 MiB, Total: 17200.8 MiB daemon.log:2021-02-14T08:18:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:18:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5279.0 MiB (Base: 4688.8, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:18:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5046.1 MiB, cgroup-rss: 5334.4 MiB, Avail: 11814.1 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T08:18:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.34%, Anon: 5046.1 MiB, Avail: 12155.0 MiB, Total: 17201.0 MiB daemon.log:2021-02-14T08:18:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5286.5 MiB (Base: 4696.4, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:18:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:18:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.6 MiB, cgroup-rss: 5342.0 MiB, Avail: 11807.2 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T08:18:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.6 MiB, Avail: 12148.1 MiB, Total: 17201.7 MiB daemon.log:2021-02-14T08:19:25.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"} daemon.log:2021-02-14T08:19:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.0%; Reserved: 4600.0 MiB, Platform: 5288.5 MiB (Base: 4698.1, k8s-system: 590.4), k8s-addon: 0.0 daemon.log:2021-02-14T08:19:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:19:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5057.0 MiB, cgroup-rss: 5343.8 MiB, Avail: 11803.6 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T08:19:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.39%, Anon: 5055.7 MiB, Avail: 12145.6 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T08:19:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:19:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5277.2 MiB (Base: 4687.1, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:19:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5044.4 MiB, cgroup-rss: 5332.7 MiB, Avail: 11816.4 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T08:19:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.33%, Anon: 5044.4 MiB, Avail: 12157.2 MiB, Total: 17201.6 MiB daemon.log:2021-02-14T08:20:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:20:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5279.6 MiB (Base: 4689.4, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:20:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5046.5 MiB, cgroup-rss: 5335.0 MiB, Avail: 11813.9 MiB, Total: 16860.4 MiB daemon.log:2021-02-14T08:20:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.34%, Anon: 5046.5 MiB, Avail: 12154.8 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T08:20:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:20:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.4 MiB (Base: 4691.3, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:20:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.0 MiB, cgroup-rss: 5336.9 MiB, Avail: 11812.2 MiB, Total: 16860.2 MiB daemon.log:2021-02-14T08:20:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.0 MiB, Avail: 12153.0 MiB, Total: 17201.0 MiB daemon.log:2021-02-14T08:21:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:21:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5279.7 MiB (Base: 4689.7, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:21:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5046.2 MiB, cgroup-rss: 5335.1 MiB, Avail: 11813.9 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T08:21:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.34%, Anon: 5046.0 MiB, Avail: 12155.0 MiB, Total: 17201.0 MiB daemon.log:2021-02-14T08:21:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5280.0 MiB (Base: 4689.9, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:21:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:21:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5046.8 MiB, cgroup-rss: 5335.6 MiB, Avail: 11813.5 MiB, Total: 16860.3 MiB daemon.log:2021-02-14T08:21:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.34%, Anon: 5046.8 MiB, Avail: 12154.4 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T08:22:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:22:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5280.1 MiB (Base: 4689.9, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:22:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5046.6 MiB, cgroup-rss: 5335.5 MiB, Avail: 11813.6 MiB, Total: 16860.2 MiB daemon.log:2021-02-14T08:22:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.34%, Anon: 5046.6 MiB, Avail: 12154.5 MiB, Total: 17201.1 MiB daemon.log:2021-02-14T08:22:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:22:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5278.4 MiB (Base: 4688.2, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:22:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5044.8 MiB, cgroup-rss: 5333.8 MiB, Avail: 11814.8 MiB, Total: 16859.6 MiB daemon.log:2021-02-14T08:22:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.33%, Anon: 5044.8 MiB, Avail: 12155.8 MiB, Total: 17200.6 MiB daemon.log:2021-02-14T08:23:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:23:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5279.2 MiB (Base: 4689.1, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:23:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5045.6 MiB, cgroup-rss: 5334.6 MiB, Avail: 11814.1 MiB, Total: 16859.7 MiB daemon.log:2021-02-14T08:23:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.33%, Anon: 5045.6 MiB, Avail: 12155.1 MiB, Total: 17200.6 MiB daemon.log:2021-02-14T08:23:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5278.5 MiB (Base: 4688.4, k8s-system: 590.2), k8s-addon: 0.0 daemon.log:2021-02-14T08:23:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5045.5 MiB, cgroup-rss: 5334.0 MiB, Avail: 11814.0 MiB, Total: 16859.5 MiB daemon.log:2021-02-14T08:23:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.33%, Anon: 5045.5 MiB, Avail: 12155.0 MiB, Total: 17200.5 MiB daemon.log:2021-02-14T08:23:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:24:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:24:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5278.7 MiB (Base: 4688.6, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:24:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5045.8 MiB, cgroup-rss: 5334.1 MiB, Avail: 11814.0 MiB, Total: 16859.9 MiB daemon.log:2021-02-14T08:24:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.33%, Anon: 5045.8 MiB, Avail: 12155.0 MiB, Total: 17200.9 MiB daemon.log:2021-02-14T08:24:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:24:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.4 MiB (Base: 4691.4, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:24:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.3 MiB, cgroup-rss: 5336.9 MiB, Avail: 11811.8 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T08:24:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.3 MiB, Avail: 12152.7 MiB, Total: 17201.0 MiB daemon.log:2021-02-14T08:25:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:25:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.3 MiB (Base: 4693.4, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:25:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.6 MiB, cgroup-rss: 5338.8 MiB, Avail: 11808.9 MiB, Total: 16859.5 MiB daemon.log:2021-02-14T08:25:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.1 MiB, Avail: 12150.5 MiB, Total: 17200.6 MiB daemon.log:2021-02-14T08:25:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5277.8 MiB (Base: 4687.7, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:25:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5045.0 MiB, cgroup-rss: 5333.2 MiB, Avail: 11815.1 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T08:25:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.33%, Anon: 5045.0 MiB, Avail: 12156.1 MiB, Total: 17201.1 MiB daemon.log:2021-02-14T08:25:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:26:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:26:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.3 MiB (Base: 4692.7, k8s-system: 590.6), k8s-addon: 0.0 daemon.log:2021-02-14T08:26:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5049.5 MiB, cgroup-rss: 5338.7 MiB, Avail: 11810.2 MiB, Total: 16859.7 MiB daemon.log:2021-02-14T08:26:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.5 MiB, Avail: 12151.2 MiB, Total: 17200.7 MiB daemon.log:2021-02-14T08:26:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:26:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.3 MiB (Base: 4691.2, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:26:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.7 MiB, cgroup-rss: 5336.7 MiB, Avail: 11811.0 MiB, Total: 16859.7 MiB daemon.log:2021-02-14T08:26:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.7 MiB, Avail: 12152.0 MiB, Total: 17200.7 MiB daemon.log:2021-02-14T08:27:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:27:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5287.0 MiB (Base: 4696.9, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:27:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5054.4 MiB, cgroup-rss: 5342.5 MiB, Avail: 11805.7 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T08:27:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5054.4 MiB, Avail: 12146.8 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T08:27:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5297.4 MiB (Base: 4707.3, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T08:27:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5064.8 MiB, cgroup-rss: 5352.8 MiB, Avail: 11795.3 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T08:27:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.44%, Anon: 5064.8 MiB, Avail: 12136.3 MiB, Total: 17201.1 MiB daemon.log:2021-02-14T08:27:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:28:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T08:28:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.9 MiB (Base: 4695.9, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T08:28:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.4 MiB, cgroup-rss: 5341.3 MiB, Avail: 11806.3 MiB, Total: 16859.8 MiB daemon.log:2021-02-14T08:28:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.4 MiB, Avail: 12147.3 MiB, Total: 17200.7 MiB daemon.log:2021-02-14T08:28:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T08:28:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.5 MiB (Base: 4694.5, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:28:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.6 MiB, cgroup-rss: 5339.9 MiB, Avail: 11808.3 MiB, Total: 16859.9 MiB
daemon.log:2021-02-14T08:28:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.6 MiB, Avail: 12149.3 MiB, Total: 17200.9 MiB
daemon.log:2021-02-14T08:29:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5286.0 MiB (Base: 4695.7, k8s-system: 590.3), k8s-addon: 0.0
daemon.log:2021-02-14T08:29:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:29:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.1 MiB, cgroup-rss: 5342.7 MiB, Avail: 11808.2 MiB, Total: 16860.2 MiB
daemon.log:2021-02-14T08:29:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.6 MiB, Avail: 12148.1 MiB, Total: 17200.7 MiB
daemon.log:2021-02-14T08:29:55.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T08:29:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.0 MiB (Base: 4695.1, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:29:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.3 MiB, cgroup-rss: 5340.4 MiB, Avail: 11807.3 MiB, Total: 16859.5 MiB
daemon.log:2021-02-14T08:29:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.5 MiB, Avail: 12148.1 MiB, Total: 17200.6 MiB
daemon.log:2021-02-14T08:29:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:30:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:30:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5287.1 MiB (Base: 4696.9, k8s-system: 590.2), k8s-addon: 0.0
daemon.log:2021-02-14T08:30:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5054.3 MiB, cgroup-rss: 5342.3 MiB, Avail: 11804.9 MiB, Total: 16859.2 MiB
daemon.log:2021-02-14T08:30:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5054.3 MiB, Avail: 12146.3 MiB, Total: 17200.6 MiB
daemon.log:2021-02-14T08:30:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.0 MiB (Base: 4694.7, k8s-system: 590.3), k8s-addon: 0.0
daemon.log:2021-02-14T08:30:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.5 MiB, cgroup-rss: 5340.3 MiB, Avail: 11808.1 MiB, Total: 16859.6 MiB
daemon.log:2021-02-14T08:30:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.7 MiB, Avail: 12148.7 MiB, Total: 17200.4 MiB
daemon.log:2021-02-14T08:30:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:31:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:31:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.0%; Reserved: 4600.0 MiB, Platform: 5289.6 MiB (Base: 4699.3, k8s-system: 590.2), k8s-addon: 0.0
daemon.log:2021-02-14T08:31:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5055.5 MiB, cgroup-rss: 5344.6 MiB, Avail: 11803.1 MiB, Total: 16858.6 MiB
daemon.log:2021-02-14T08:31:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.39%, Anon: 5055.5 MiB, Avail: 12144.1 MiB, Total: 17199.6 MiB
daemon.log:2021-02-14T08:31:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:31:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5286.8 MiB (Base: 4696.9, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:31:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5054.9 MiB, cgroup-rss: 5342.6 MiB, Avail: 11804.9 MiB, Total: 16859.7 MiB
daemon.log:2021-02-14T08:31:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.39%, Anon: 5055.0 MiB, Avail: 12145.7 MiB, Total: 17200.7 MiB
daemon.log:2021-02-14T08:32:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:32:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.0%; Reserved: 4600.0 MiB, Platform: 5289.0 MiB (Base: 4698.6, k8s-system: 590.4), k8s-addon: 0.0
daemon.log:2021-02-14T08:32:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5055.6 MiB, cgroup-rss: 5344.4 MiB, Avail: 11804.1 MiB, Total: 16859.7 MiB
daemon.log:2021-02-14T08:32:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.39%, Anon: 5055.6 MiB, Avail: 12145.1 MiB, Total: 17200.7 MiB
daemon.log:2021-02-14T08:32:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:32:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.1 MiB (Base: 4691.1, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:32:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.5 MiB, cgroup-rss: 5336.5 MiB, Avail: 11811.8 MiB, Total: 16860.2 MiB
daemon.log:2021-02-14T08:32:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.5 MiB, Avail: 12152.8 MiB, Total: 17201.3 MiB
daemon.log:2021-02-14T08:33:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:33:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.2 MiB (Base: 4694.8, k8s-system: 590.3), k8s-addon: 0.0
daemon.log:2021-02-14T08:33:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.9 MiB, cgroup-rss: 5340.6 MiB, Avail: 11808.0 MiB, Total: 16859.9 MiB
daemon.log:2021-02-14T08:33:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.9 MiB, Avail: 12149.0 MiB, Total: 17200.9 MiB
daemon.log:2021-02-14T08:33:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:33:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.5 MiB (Base: 4693.1, k8s-system: 590.4), k8s-addon: 0.0
daemon.log:2021-02-14T08:33:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.1 MiB, cgroup-rss: 5339.5 MiB, Avail: 11809.7 MiB, Total: 16859.9 MiB
daemon.log:2021-02-14T08:33:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.9 MiB, Avail: 12150.5 MiB, Total: 17201.4 MiB
daemon.log:2021-02-14T08:34:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:34:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5283.1 MiB (Base: 4692.9, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:34:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.0 MiB, cgroup-rss: 5338.1 MiB, Avail: 11809.8 MiB, Total: 16859.8 MiB
daemon.log:2021-02-14T08:34:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.6 MiB, Avail: 12149.5 MiB, Total: 17200.1 MiB
daemon.log:2021-02-14T08:34:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:34:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.5 MiB (Base: 4691.7, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T08:34:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.7 MiB, cgroup-rss: 5336.9 MiB, Avail: 11811.5 MiB, Total: 16860.2 MiB
daemon.log:2021-02-14T08:34:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.7 MiB, Avail: 12152.5 MiB, Total: 17201.2 MiB
daemon.log:2021-02-14T08:35:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:35:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.8 MiB (Base: 4692.7, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:35:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.1 MiB, cgroup-rss: 5338.2 MiB, Avail: 11809.8 MiB, Total: 16859.9 MiB
daemon.log:2021-02-14T08:35:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.1 MiB, Avail: 12150.9 MiB, Total: 17201.0 MiB
daemon.log:2021-02-14T08:35:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.8 MiB (Base: 4692.8, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:35:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:35:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5049.4 MiB, cgroup-rss: 5338.3 MiB, Avail: 11810.3 MiB, Total: 16859.7 MiB
daemon.log:2021-02-14T08:35:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.4 MiB, Avail: 12151.1 MiB, Total: 17200.5 MiB
daemon.log:2021-02-14T08:36:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:36:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.8 MiB (Base: 4692.7, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:36:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5049.9 MiB, cgroup-rss: 5338.3 MiB, Avail: 11809.6 MiB, Total: 16859.5 MiB
daemon.log:2021-02-14T08:36:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.9 MiB, Avail: 12150.6 MiB, Total: 17200.5 MiB
daemon.log:2021-02-14T08:36:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:36:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.4 MiB (Base: 4693.1, k8s-system: 590.3), k8s-addon: 0.0
daemon.log:2021-02-14T08:36:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5049.8 MiB, cgroup-rss: 5338.8 MiB, Avail: 11810.1 MiB, Total: 16859.9 MiB
daemon.log:2021-02-14T08:36:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.8 MiB, Avail: 12151.1 MiB, Total: 17200.9 MiB
daemon.log:2021-02-14T08:37:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:37:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.5 MiB (Base: 4693.6, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:37:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.4 MiB, cgroup-rss: 5339.0 MiB, Avail: 11809.7 MiB, Total: 16860.0 MiB
daemon.log:2021-02-14T08:37:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.4 MiB, Avail: 12150.7 MiB, Total: 17201.1 MiB
daemon.log:2021-02-14T08:37:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:37:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.4 MiB (Base: 4694.2, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:37:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.3 MiB, cgroup-rss: 5339.8 MiB, Avail: 11808.4 MiB, Total: 16859.8 MiB
daemon.log:2021-02-14T08:37:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.3 MiB, Avail: 12149.4 MiB, Total: 17200.8 MiB
daemon.log:2021-02-14T08:38:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:38:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.6 MiB (Base: 4692.4, k8s-system: 590.2), k8s-addon: 0.0
daemon.log:2021-02-14T08:38:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5049.7 MiB, cgroup-rss: 5338.0 MiB, Avail: 11809.9 MiB, Total: 16859.6 MiB
daemon.log:2021-02-14T08:38:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.7 MiB, Avail: 12150.9 MiB, Total: 17200.6 MiB
daemon.log:2021-02-14T08:38:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:38:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5280.4 MiB (Base: 4690.3, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:38:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5047.6 MiB, cgroup-rss: 5335.8 MiB, Avail: 11813.0 MiB, Total: 16860.5 MiB
daemon.log:2021-02-14T08:38:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.34%, Anon: 5047.6 MiB, Avail: 12154.0 MiB, Total: 17201.5 MiB
daemon.log:2021-02-14T08:39:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:39:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.8 MiB (Base: 4691.9, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:39:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.6 MiB, cgroup-rss: 5337.2 MiB, Avail: 11811.7 MiB, Total: 16860.3 MiB
daemon.log:2021-02-14T08:39:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.4 MiB, Avail: 12153.1 MiB, Total: 17201.5 MiB
daemon.log:2021-02-14T08:39:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:39:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.3 MiB (Base: 4694.2, k8s-system: 590.2), k8s-addon: 0.0
daemon.log:2021-02-14T08:39:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.1 MiB, cgroup-rss: 5339.7 MiB, Avail: 11809.7 MiB, Total: 16860.8 MiB
daemon.log:2021-02-14T08:39:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5051.1 MiB, Avail: 12150.8 MiB, Total: 17201.8 MiB
daemon.log:2021-02-14T08:40:25.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T08:40:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:40:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.6 MiB (Base: 4694.7, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:40:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.4 MiB, cgroup-rss: 5340.1 MiB, Avail: 11809.2 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T08:40:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.4 MiB, Avail: 12150.2 MiB, Total: 17201.6 MiB
daemon.log:2021-02-14T08:40:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5280.6 MiB (Base: 4690.7, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:40:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:40:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5047.1 MiB, cgroup-rss: 5336.1 MiB, Avail: 11813.5 MiB, Total: 16860.7 MiB
daemon.log:2021-02-14T08:40:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.34%, Anon: 5047.1 MiB, Avail: 12154.5 MiB, Total: 17201.7 MiB
daemon.log:2021-02-14T08:41:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:41:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.0 MiB (Base: 4691.6, k8s-system: 590.4), k8s-addon: 0.0
daemon.log:2021-02-14T08:41:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.2 MiB, cgroup-rss: 5337.4 MiB, Avail: 11812.3 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T08:41:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.2 MiB, Avail: 12153.3 MiB, Total: 17201.6 MiB
daemon.log:2021-02-14T08:41:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:41:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.3 MiB (Base: 4691.3, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:41:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5047.7 MiB, cgroup-rss: 5336.7 MiB, Avail: 11812.5 MiB, Total: 16860.2 MiB
daemon.log:2021-02-14T08:41:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5047.7 MiB, Avail: 12153.5 MiB, Total: 17201.2 MiB
daemon.log:2021-02-14T08:42:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.9 MiB (Base: 4692.1, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T08:42:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.7 MiB, cgroup-rss: 5337.3 MiB, Avail: 11811.7 MiB, Total: 16860.5 MiB
daemon.log:2021-02-14T08:42:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.7 MiB, Avail: 12152.8 MiB, Total: 17201.5 MiB
daemon.log:2021-02-14T08:42:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:42:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:42:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.2 MiB (Base: 4691.4, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:42:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.4 MiB, cgroup-rss: 5336.7 MiB, Avail: 11812.3 MiB, Total: 16860.7 MiB
daemon.log:2021-02-14T08:42:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.4 MiB, Avail: 12153.3 MiB, Total: 17201.7 MiB
daemon.log:2021-02-14T08:43:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:43:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.0 MiB (Base: 4692.1, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:43:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5049.4 MiB, cgroup-rss: 5337.4 MiB, Avail: 11811.4 MiB, Total: 16860.7 MiB
daemon.log:2021-02-14T08:43:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5049.1 MiB, Avail: 12152.5 MiB, Total: 17201.6 MiB
daemon.log:2021-02-14T08:43:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:43:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5281.8 MiB (Base: 4691.9, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T08:43:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5049.4 MiB, cgroup-rss: 5337.2 MiB, Avail: 11811.0 MiB, Total: 16860.4 MiB
daemon.log:2021-02-14T08:43:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5049.4 MiB, Avail: 12152.1 MiB, Total: 17201.5 MiB
daemon.log:2021-02-14T08:44:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:44:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.4 MiB (Base: 4694.3, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:44:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.0 MiB, cgroup-rss: 5341.2 MiB, Avail: 11809.2 MiB, Total: 16860.2 MiB
daemon.log:2021-02-14T08:44:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5051.0 MiB, Avail: 12150.3 MiB, Total: 17201.3 MiB
daemon.log:2021-02-14T08:44:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.1 MiB (Base: 4693.2, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:44:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.2 MiB, cgroup-rss: 5338.6 MiB, Avail: 11810.0 MiB, Total: 16860.2 MiB
daemon.log:2021-02-14T08:44:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.2 MiB, Avail: 12151.1 MiB, Total: 17201.3 MiB
daemon.log:2021-02-14T08:44:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:45:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.8 MiB (Base: 4694.7, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:45:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.7 MiB, cgroup-rss: 5340.2 MiB, Avail: 11808.3 MiB, Total: 16860.0 MiB
daemon.log:2021-02-14T08:45:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.7 MiB, Avail: 12149.4 MiB, Total: 17201.0 MiB
daemon.log:2021-02-14T08:45:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:45:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:45:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.1 MiB (Base: 4695.0, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:45:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.1 MiB, cgroup-rss: 5340.7 MiB, Avail: 11808.0 MiB, Total: 16860.1 MiB
daemon.log:2021-02-14T08:45:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.1 MiB, Avail: 12149.1 MiB, Total: 17201.2 MiB
daemon.log:2021-02-14T08:46:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:46:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.0%; Reserved: 4600.0 MiB, Platform: 5287.8 MiB (Base: 4697.7, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:46:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5054.7 MiB, cgroup-rss: 5343.3 MiB, Avail: 11805.0 MiB, Total: 16859.7 MiB
daemon.log:2021-02-14T08:46:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.39%, Anon: 5054.7 MiB, Avail: 12146.1 MiB, Total: 17200.8 MiB
daemon.log:2021-02-14T08:46:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:46:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5286.6 MiB (Base: 4696.7, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:46:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.6 MiB, cgroup-rss: 5342.2 MiB, Avail: 11806.2 MiB, Total: 16859.8 MiB
daemon.log:2021-02-14T08:46:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.6 MiB, Avail: 12147.3 MiB, Total: 17200.9 MiB
daemon.log:2021-02-14T08:47:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:47:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.9 MiB (Base: 4695.8, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:47:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.3 MiB, cgroup-rss: 5341.4 MiB, Avail: 11807.8 MiB, Total: 16860.1 MiB
daemon.log:2021-02-14T08:47:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.3 MiB, Avail: 12149.0 MiB, Total: 17201.2 MiB
daemon.log:2021-02-14T08:47:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.9 MiB (Base: 4694.1, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:47:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:47:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.6 MiB, cgroup-rss: 5339.5 MiB, Avail: 11809.3 MiB, Total: 16859.9 MiB
daemon.log:2021-02-14T08:47:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.6 MiB, Avail: 12150.4 MiB, Total: 17201.1 MiB
daemon.log:2021-02-14T08:48:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:48:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.9 MiB (Base: 4694.1, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T08:48:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.7 MiB, cgroup-rss: 5339.5 MiB, Avail: 11809.2 MiB, Total: 16859.9 MiB
daemon.log:2021-02-14T08:48:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.7 MiB, Avail: 12150.4 MiB, Total: 17201.0 MiB
daemon.log:2021-02-14T08:48:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:48:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.3 MiB (Base: 4693.3, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:48:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.2 MiB, cgroup-rss: 5338.8 MiB, Avail: 11809.7 MiB, Total: 16859.9 MiB
daemon.log:2021-02-14T08:48:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.2 MiB, Avail: 12150.8 MiB, Total: 17201.0 MiB
daemon.log:2021-02-14T08:49:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.2 MiB (Base: 4695.2, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:49:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.8 MiB, cgroup-rss: 5340.5 MiB, Avail: 11808.4 MiB, Total: 16860.1 MiB
daemon.log:2021-02-14T08:49:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.8 MiB, Avail: 12149.5 MiB, Total: 17201.3 MiB
daemon.log:2021-02-14T08:49:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:49:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.6 MiB (Base: 4693.7, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:49:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.7 MiB, cgroup-rss: 5339.1 MiB, Avail: 11809.6 MiB, Total: 16860.3 MiB
daemon.log:2021-02-14T08:49:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.7 MiB, Avail: 12150.1 MiB, Total: 17200.8 MiB
daemon.log:2021-02-14T08:49:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:50:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.9 MiB (Base: 4695.1, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T08:50:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.1 MiB, cgroup-rss: 5342.2 MiB, Avail: 11808.1 MiB, Total: 16860.2 MiB
daemon.log:2021-02-14T08:50:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.0 MiB, Avail: 12147.8 MiB, Total: 17200.9 MiB
daemon.log:2021-02-14T08:50:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:50:55.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T08:50:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5286.7 MiB (Base: 4696.9, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T08:50:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.9 MiB, cgroup-rss: 5342.3 MiB, Avail: 11806.8 MiB, Total: 16860.7 MiB
daemon.log:2021-02-14T08:50:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.9 MiB, Avail: 12147.3 MiB, Total: 17201.2 MiB
daemon.log:2021-02-14T08:50:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:51:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.0 MiB (Base: 4695.1, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:51:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.1 MiB, cgroup-rss: 5340.6 MiB, Avail: 11808.6 MiB, Total: 16860.7 MiB
daemon.log:2021-02-14T08:51:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.1 MiB, Avail: 12149.1 MiB, Total: 17201.2 MiB
daemon.log:2021-02-14T08:51:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:51:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:51:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.4 MiB (Base: 4695.4, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:51:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.8 MiB, cgroup-rss: 5340.9 MiB, Avail: 11809.2 MiB, Total: 16861.0 MiB
daemon.log:2021-02-14T08:51:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.8 MiB, Avail: 12149.7 MiB, Total: 17201.5 MiB
daemon.log:2021-02-14T08:52:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:52:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.5 MiB (Base: 4695.4, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:52:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.5 MiB, cgroup-rss: 5341.1 MiB, Avail: 11808.4 MiB, Total: 16860.9 MiB
daemon.log:2021-02-14T08:52:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.5 MiB, Avail: 12148.9 MiB, Total: 17201.4 MiB
daemon.log:2021-02-14T08:52:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:52:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5286.4 MiB (Base: 4696.3, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:52:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.4 MiB, cgroup-rss: 5341.9 MiB, Avail: 11807.5 MiB, Total: 16860.9 MiB
daemon.log:2021-02-14T08:52:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.4 MiB, Avail: 12148.0 MiB, Total: 17201.4 MiB
daemon.log:2021-02-14T08:53:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:53:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.7 MiB (Base: 4694.6, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:53:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.1 MiB, cgroup-rss: 5340.1 MiB, Avail: 11809.8 MiB, Total: 16860.9 MiB
daemon.log:2021-02-14T08:53:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5051.1 MiB, Avail: 12150.2 MiB, Total: 17201.3 MiB
daemon.log:2021-02-14T08:53:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:53:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.4 MiB (Base: 4694.4, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:53:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.1 MiB, cgroup-rss: 5339.8 MiB, Avail: 11809.6 MiB, Total: 16860.7 MiB
daemon.log:2021-02-14T08:53:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5051.1 MiB, Avail: 12150.0 MiB, Total: 17201.2 MiB
daemon.log:2021-02-14T08:54:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.2 MiB (Base: 4695.4, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T08:54:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.4 MiB, cgroup-rss: 5340.3 MiB, Avail: 11809.1 MiB, Total: 16860.5 MiB
daemon.log:2021-02-14T08:54:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.4 MiB, Avail: 12149.5 MiB, Total: 17201.0 MiB
daemon.log:2021-02-14T08:54:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:54:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:54:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.2 MiB (Base: 4695.3, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:54:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.5 MiB, cgroup-rss: 5340.7 MiB, Avail: 11809.1 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T08:54:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.5 MiB, Avail: 12149.5 MiB, Total: 17201.1 MiB
daemon.log:2021-02-14T08:55:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:55:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.6 MiB (Base: 4692.6, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:55:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5049.1 MiB, cgroup-rss: 5338.0 MiB, Avail: 11811.5 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T08:55:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5049.1 MiB, Avail: 12151.9 MiB, Total: 17201.0 MiB
daemon.log:2021-02-14T08:55:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:55:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.3 MiB (Base: 4692.6, k8s-system: 589.7), k8s-addon: 0.0
daemon.log:2021-02-14T08:55:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5049.0 MiB, cgroup-rss: 5337.8 MiB, Avail: 11811.9 MiB, Total: 16860.9 MiB
daemon.log:2021-02-14T08:55:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5049.0 MiB, Avail: 12152.3 MiB, Total: 17201.3 MiB
daemon.log:2021-02-14T08:56:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:56:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.9 MiB (Base: 4693.9, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:56:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.2 MiB, cgroup-rss: 5339.3 MiB, Avail: 11810.7 MiB, Total: 16860.9 MiB
daemon.log:2021-02-14T08:56:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.2 MiB, Avail: 12151.1 MiB, Total: 17201.4 MiB
daemon.log:2021-02-14T08:56:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:56:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5280.9 MiB (Base: 4691.1, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T08:56:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5048.0 MiB, cgroup-rss: 5336.4 MiB, Avail: 11812.9 MiB, Total: 16860.9 MiB
daemon.log:2021-02-14T08:56:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.35%, Anon: 5048.0 MiB, Avail: 12153.4 MiB, Total: 17201.4 MiB
daemon.log:2021-02-14T08:57:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:57:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.8 MiB (Base: 4696.0, k8s-system: 589.7), k8s-addon: 0.0
daemon.log:2021-02-14T08:57:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.4 MiB, cgroup-rss: 5341.3 MiB, Avail: 11807.2 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T08:57:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.1 MiB, Avail: 12147.9 MiB, Total: 17201.1 MiB
daemon.log:2021-02-14T08:57:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:57:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5286.2 MiB (Base: 4696.1, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:57:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.1 MiB, cgroup-rss: 5341.7 MiB, Avail: 11807.5 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T08:57:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.1 MiB, Avail: 12148.0 MiB, Total: 17201.1 MiB
daemon.log:2021-02-14T08:58:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:58:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.8 MiB (Base: 4694.8, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T08:58:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.3 MiB, cgroup-rss: 5340.3 MiB, Avail: 11808.3 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T08:58:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.3 MiB, Avail: 12148.8 MiB, Total: 17201.1 MiB
daemon.log:2021-02-14T08:58:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:58:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.7 MiB (Base: 4693.7, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T08:58:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.6 MiB, cgroup-rss: 5339.2 MiB, Avail: 11810.5 MiB, Total: 16861.1 MiB
daemon.log:2021-02-14T08:58:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.6 MiB, Avail: 12151.0 MiB, Total: 17201.6 MiB
daemon.log:2021-02-14T08:59:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:59:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.6 MiB (Base: 4695.5, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T08:59:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.6 MiB, cgroup-rss: 5340.6 MiB, Avail: 11808.3 MiB, Total: 16860.9 MiB
daemon.log:2021-02-14T08:59:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.3 MiB, Avail: 12149.1 MiB, Total: 17201.4 MiB
daemon.log:2021-02-14T08:59:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T08:59:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.8 MiB (Base: 4696.1, k8s-system: 589.7), k8s-addon: 0.0
daemon.log:2021-02-14T08:59:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.5 MiB, cgroup-rss: 5341.3 MiB, Avail: 11807.2 MiB, Total: 16860.7 MiB
daemon.log:2021-02-14T08:59:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.5 MiB, Avail: 12147.7 MiB, Total: 17201.2 MiB
daemon.log:2021-02-14T09:00:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:00:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.2 MiB (Base: 4694.4, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T09:00:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.3 MiB, cgroup-rss: 5339.7 MiB, Avail: 11809.2 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T09:00:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.3 MiB, Avail: 12149.7 MiB, Total: 17201.0 MiB
daemon.log:2021-02-14T09:00:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:00:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.3 MiB (Base: 4693.2, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T09:00:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.0 MiB, cgroup-rss: 5338.8 MiB, Avail: 11810.5 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T09:00:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.0 MiB, Avail: 12151.1 MiB, Total: 17201.1 MiB
daemon.log:2021-02-14T09:01:25.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T09:01:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:01:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5283.0 MiB (Base: 4693.0, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:01:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.1 MiB, cgroup-rss: 5338.5 MiB, Avail: 11810.3 MiB, Total: 16860.4 MiB
daemon.log:2021-02-14T09:01:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.1 MiB, Avail: 12150.9 MiB, Total: 17201.0 MiB
daemon.log:2021-02-14T09:01:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.2 MiB (Base: 4693.2, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:01:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.6 MiB, cgroup-rss: 5338.7 MiB, Avail: 11810.1 MiB, Total: 16860.6 MiB
daemon.log:2021-02-14T09:01:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.6 MiB, Avail: 12150.6 MiB, Total: 17201.1 MiB
daemon.log:2021-02-14T09:01:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:02:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:02:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.3 MiB (Base: 4695.3, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:02:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.9 MiB, cgroup-rss: 5340.8 MiB, Avail: 11807.9 MiB, Total: 16860.8 MiB
daemon.log:2021-02-14T09:02:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.9 MiB, Avail: 12148.4 MiB, Total: 17201.3 MiB
daemon.log:2021-02-14T09:02:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:02:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.6 MiB (Base: 4692.7, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:02:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5049.8 MiB, cgroup-rss: 5338.1 MiB, Avail: 11810.5 MiB, Total: 16860.3 MiB
daemon.log:2021-02-14T09:02:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.8 MiB, Avail: 12151.0 MiB, Total: 17200.8 MiB
daemon.log:2021-02-14T09:03:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:03:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.0 MiB (Base: 4694.2, k8s-system: 589.8), k8s-addon: 0.0 daemon.log:2021-02-14T09:03:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5051.8 MiB, cgroup-rss: 5339.5 MiB, Avail: 11808.8 MiB, Total: 16860.7 MiB daemon.log:2021-02-14T09:03:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.6 MiB, Avail: 12149.6 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T09:03:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:03:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.0 MiB (Base: 4694.0, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T09:03:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.9 MiB, cgroup-rss: 5339.5 MiB, Avail: 11809.2 MiB, Total: 16860.1 MiB daemon.log:2021-02-14T09:03:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.9 MiB, Avail: 12149.7 MiB, Total: 17200.6 MiB daemon.log:2021-02-14T09:04:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:04:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.0 MiB (Base: 4695.4, k8s-system: 589.6), k8s-addon: 0.0 daemon.log:2021-02-14T09:04:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.2 MiB, cgroup-rss: 5342.0 MiB, Avail: 11807.8 MiB, Total: 16860.0 MiB daemon.log:2021-02-14T09:04:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.2 MiB, Avail: 12148.4 MiB, Total: 17200.6 MiB daemon.log:2021-02-14T09:04:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:04:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5282.9 MiB (Base: 4693.2, k8s-system: 589.7), k8s-addon: 0.0 daemon.log:2021-02-14T09:04:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.6 MiB, cgroup-rss: 5338.4 MiB, Avail: 11810.1 MiB, Total: 16860.7 MiB daemon.log:2021-02-14T09:04:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.6 MiB, Avail: 12150.7 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T09:05:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T09:05:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.7 MiB (Base: 4695.6, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T09:05:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.5 MiB, cgroup-rss: 5341.2 MiB, Avail: 11807.9 MiB, Total: 16860.4 MiB daemon.log:2021-02-14T09:05:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.5 MiB, Avail: 12148.4 MiB, Total: 17200.9 MiB daemon.log:2021-02-14T09:05:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:05:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.0%; Reserved: 4600.0 MiB, Platform: 5288.7 MiB (Base: 4698.7, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T09:05:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5055.0 MiB, cgroup-rss: 5344.2 MiB, Avail: 11805.5 MiB, Total: 16860.5 MiB daemon.log:2021-02-14T09:05:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.39%, Anon: 5055.0 MiB, Avail: 12146.0 MiB, Total: 17201.1 MiB daemon.log:2021-02-14T09:06:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:06:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.1 MiB (Base: 4694.0, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T09:06:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.5 MiB, cgroup-rss: 5339.6 MiB, Avail: 11810.2 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T09:06:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.5 MiB, Avail: 12150.7 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T09:06:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:06:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.2 MiB (Base: 4694.1, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T09:06:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.7 MiB, cgroup-rss: 5339.7 MiB, Avail: 11809.7 MiB, Total: 16860.3 MiB daemon.log:2021-02-14T09:06:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.7 MiB, Avail: 12150.2 MiB, Total: 17200.9 MiB daemon.log:2021-02-14T09:07:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T09:07:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.6 MiB (Base: 4695.5, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T09:07:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.2 MiB, cgroup-rss: 5341.1 MiB, Avail: 11808.4 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T09:07:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.3 MiB, Avail: 12149.0 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T09:07:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.8%; Reserved: 4600.0 MiB, Platform: 5283.1 MiB (Base: 4693.2, k8s-system: 589.9), k8s-addon: 0.0 daemon.log:2021-02-14T09:07:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5049.8 MiB, cgroup-rss: 5338.6 MiB, Avail: 11810.6 MiB, Total: 16860.5 MiB daemon.log:2021-02-14T09:07:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.8 MiB, Avail: 12151.2 MiB, Total: 17201.0 MiB daemon.log:2021-02-14T09:07:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:08:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:08:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.4 MiB (Base: 4693.7, k8s-system: 589.7), k8s-addon: 0.0 daemon.log:2021-02-14T09:08:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.8 MiB, cgroup-rss: 5338.9 MiB, Avail: 11809.9 MiB, Total: 16860.7 MiB daemon.log:2021-02-14T09:08:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.8 MiB, Avail: 12150.5 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T09:08:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T09:08:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5283.6 MiB (Base: 4693.7, k8s-system: 589.9), k8s-addon: 0.0 daemon.log:2021-02-14T09:08:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5049.8 MiB, cgroup-rss: 5339.1 MiB, Avail: 11810.4 MiB, Total: 16860.2 MiB daemon.log:2021-02-14T09:08:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.8 MiB, Avail: 12150.9 MiB, Total: 17200.8 MiB daemon.log:2021-02-14T09:09:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.6 MiB (Base: 4694.9, k8s-system: 589.7), k8s-addon: 0.0 daemon.log:2021-02-14T09:09:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.7 MiB, cgroup-rss: 5339.4 MiB, Avail: 11809.9 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T09:09:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.7 MiB, Avail: 12150.3 MiB, Total: 17201.0 MiB daemon.log:2021-02-14T09:09:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:09:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:09:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.2 MiB (Base: 4695.5, k8s-system: 589.7), k8s-addon: 0.0 daemon.log:2021-02-14T09:09:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.0 MiB, cgroup-rss: 5340.6 MiB, Avail: 11808.6 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T09:09:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.0 MiB, Avail: 12149.1 MiB, Total: 17201.1 MiB daemon.log:2021-02-14T09:10:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:10:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.2 MiB (Base: 4694.1, k8s-system: 590.1), k8s-addon: 0.0 daemon.log:2021-02-14T09:10:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 29.9%, Anon: 5049.7 MiB, cgroup-rss: 5339.6 MiB, Avail: 11810.9 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T09:10:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5049.7 MiB, Avail: 12151.3 MiB, Total: 17201.1 MiB daemon.log:2021-02-14T09:10:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T09:10:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5286.6 MiB (Base: 4696.8, k8s-system: 589.8), k8s-addon: 0.0 daemon.log:2021-02-14T09:10:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.7 MiB, cgroup-rss: 5342.0 MiB, Avail: 11807.7 MiB, Total: 16860.4 MiB daemon.log:2021-02-14T09:10:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5052.7 MiB, Avail: 12148.1 MiB, Total: 17200.8 MiB daemon.log:2021-02-14T09:11:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:11:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5287.5 MiB (Base: 4697.5, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T09:11:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5053.5 MiB, cgroup-rss: 5342.9 MiB, Avail: 11806.7 MiB, Total: 16860.2 MiB daemon.log:2021-02-14T09:11:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5053.5 MiB, Avail: 12147.1 MiB, Total: 17200.6 MiB daemon.log:2021-02-14T09:11:55.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"} daemon.log:2021-02-14T09:11:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:11:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.3 MiB (Base: 4694.6, k8s-system: 589.7), k8s-addon: 0.0 daemon.log:2021-02-14T09:11:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.6 MiB, cgroup-rss: 5339.7 MiB, Avail: 11810.1 MiB, Total: 16860.7 MiB daemon.log:2021-02-14T09:11:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.6 MiB, Avail: 12150.6 MiB, Total: 17201.1 MiB daemon.log:2021-02-14T09:12:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:12:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5287.4 MiB (Base: 4697.9, k8s-system: 589.6), k8s-addon: 0.0 daemon.log:2021-02-14T09:12:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5054.1 MiB, cgroup-rss: 5342.9 MiB, Avail: 11806.7 MiB, Total: 16860.8 MiB daemon.log:2021-02-14T09:12:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.38%, Anon: 5054.1 MiB, Avail: 12147.1 MiB, Total: 17201.2 MiB daemon.log:2021-02-14T09:12:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
daemon.log:2021-02-14T09:12:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5284.8 MiB (Base: 4694.8, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T09:12:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5050.5 MiB, cgroup-rss: 5340.2 MiB, Avail: 11810.1 MiB, Total: 16860.6 MiB daemon.log:2021-02-14T09:12:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.36%, Anon: 5050.5 MiB, Avail: 12150.5 MiB, Total: 17201.0 MiB daemon.log:2021-02-14T09:13:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.9%; Reserved: 4600.0 MiB, Platform: 5285.4 MiB (Base: 4695.8, k8s-system: 589.6), k8s-addon: 0.0 daemon.log:2021-02-14T09:13:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5052.0 MiB, cgroup-rss: 5340.8 MiB, Avail: 11808.9 MiB, Total: 16861.0 MiB daemon.log:2021-02-14T09:13:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:13:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.37%, Anon: 5051.9 MiB, Avail: 12149.4 MiB, Total: 17201.3 MiB daemon.log:2021-02-14T09:13:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:13:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.0%; Reserved: 4600.0 MiB, Platform: 5288.1 MiB (Base: 4698.5, k8s-system: 589.6), k8s-addon: 0.0 daemon.log:2021-02-14T09:13:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5054.6 MiB, cgroup-rss: 5343.5 MiB, Avail: 11805.7 MiB, Total: 16860.3 MiB daemon.log:2021-02-14T09:13:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.39%, Anon: 5054.6 MiB, Avail: 12146.1 MiB, Total: 17200.7 MiB daemon.log:2021-02-14T09:14:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) daemon.log:2021-02-14T09:14:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.1%; Reserved: 4600.0 MiB, Platform: 5292.7 MiB (Base: 4702.7, k8s-system: 590.0), k8s-addon: 0.0 daemon.log:2021-02-14T09:14:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5059.2 MiB, cgroup-rss: 5350.0 MiB, Avail: 11800.8 MiB, Total: 16860.0 MiB daemon.log:2021-02-14T09:14:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.41%, Anon: 5059.2 MiB, Avail: 12141.2 MiB, Total: 17200.4 MiB daemon.log:2021-02-14T09:14:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.) 
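A note on reading the "platform memory usage" lines above: the arithmetic is consistent with Usage being Platform divided by the engineered Reserved amount rather than by total RAM, which is how it can sit above 100%. A minimal sketch of that interpretation (the variable names are mine, and the actual collectd plugin may compute it differently):

    # Sketch: reproduce the "Usage" figure from one collectd line above.
    # Assumption (mine): Usage = Platform / Reserved * 100; Platform is the
    # sum of the logged Base and k8s-system components.
    reserved_mib = 4600.0            # "Reserved" from the log line
    platform_mib = 4695.5 + 590.1    # "Base" + "k8s-system" = 5285.6 MiB

    usage_pct = platform_mib / reserved_mib * 100.0
    print(f"platform memory usage: {usage_pct:.1f}%")   # -> 114.9%, as logged

So the host is roughly 15% over its platform memory reservation throughout this window, even though the 4K memory lines show plenty of total RAM still available.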
daemon.log:2021-02-14T09:14:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5304.8 MiB (Base: 4715.1, k8s-system: 589.7), k8s-addon: 0.0
daemon.log:2021-02-14T09:14:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5071.1 MiB, cgroup-rss: 5360.2 MiB, Avail: 11788.3 MiB, Total: 16859.4 MiB
daemon.log:2021-02-14T09:14:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.48%, Anon: 5071.1 MiB, Avail: 12128.7 MiB, Total: 17199.9 MiB
daemon.log:2021-02-14T09:15:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:15:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5302.8 MiB (Base: 4713.0, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T09:15:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5069.0 MiB, cgroup-rss: 5358.2 MiB, Avail: 11790.5 MiB, Total: 16859.4 MiB
daemon.log:2021-02-14T09:15:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5069.0 MiB, Avail: 12130.9 MiB, Total: 17199.9 MiB
daemon.log:2021-02-14T09:15:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:15:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.1%; Reserved: 4600.0 MiB, Platform: 5295.6 MiB (Base: 4705.5, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T09:15:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5061.3 MiB, cgroup-rss: 5351.0 MiB, Avail: 11798.7 MiB, Total: 16860.1 MiB
daemon.log:2021-02-14T09:15:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.43%, Anon: 5061.3 MiB, Avail: 12139.1 MiB, Total: 17200.5 MiB
daemon.log:2021-02-14T09:16:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:16:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.1%; Reserved: 4600.0 MiB, Platform: 5295.1 MiB (Base: 4705.5, k8s-system: 589.6), k8s-addon: 0.0
daemon.log:2021-02-14T09:16:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5061.8 MiB, cgroup-rss: 5350.5 MiB, Avail: 11798.2 MiB, Total: 16860.0 MiB
daemon.log:2021-02-14T09:16:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.43%, Anon: 5061.8 MiB, Avail: 12138.6 MiB, Total: 17200.4 MiB
daemon.log:2021-02-14T09:16:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:16:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5300.5 MiB (Base: 4710.7, k8s-system: 589.7), k8s-addon: 0.0
daemon.log:2021-02-14T09:16:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5066.5 MiB, cgroup-rss: 5355.9 MiB, Avail: 11793.0 MiB, Total: 16859.5 MiB
daemon.log:2021-02-14T09:16:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.46%, Anon: 5066.5 MiB, Avail: 12133.4 MiB, Total: 17199.9 MiB
daemon.log:2021-02-14T09:17:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:17:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5298.8 MiB (Base: 4708.7, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:17:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5065.4 MiB, cgroup-rss: 5354.2 MiB, Avail: 11793.8 MiB, Total: 16859.2 MiB
daemon.log:2021-02-14T09:17:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.45%, Anon: 5065.4 MiB, Avail: 12134.2 MiB, Total: 17199.6 MiB
daemon.log:2021-02-14T09:17:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.1%; Reserved: 4600.0 MiB, Platform: 5296.8 MiB (Base: 4707.2, k8s-system: 589.6), k8s-addon: 0.0
daemon.log:2021-02-14T09:17:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:17:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5063.8 MiB, cgroup-rss: 5352.3 MiB, Avail: 11796.2 MiB, Total: 16860.0 MiB
daemon.log:2021-02-14T09:17:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.44%, Anon: 5064.0 MiB, Avail: 12136.4 MiB, Total: 17200.4 MiB
daemon.log:2021-02-14T09:18:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:18:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5301.1 MiB (Base: 4711.2, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T09:18:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5067.7 MiB, cgroup-rss: 5356.5 MiB, Avail: 11792.3 MiB, Total: 16860.0 MiB
daemon.log:2021-02-14T09:18:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.46%, Anon: 5067.7 MiB, Avail: 12132.7 MiB, Total: 17200.4 MiB
daemon.log:2021-02-14T09:18:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:18:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5303.8 MiB (Base: 4713.8, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:18:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5070.1 MiB, cgroup-rss: 5359.2 MiB, Avail: 11789.9 MiB, Total: 16860.0 MiB
daemon.log:2021-02-14T09:18:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.48%, Anon: 5070.1 MiB, Avail: 12130.3 MiB, Total: 17200.5 MiB
daemon.log:2021-02-14T09:19:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:19:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5302.4 MiB (Base: 4712.4, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:19:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5068.1 MiB, cgroup-rss: 5358.7 MiB, Avail: 11791.6 MiB, Total: 16859.7 MiB
daemon.log:2021-02-14T09:19:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5068.1 MiB, Avail: 12132.1 MiB, Total: 17200.1 MiB
daemon.log:2021-02-14T09:19:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:19:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5297.5 MiB (Base: 4707.6, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T09:19:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.0%, Anon: 5064.1 MiB, cgroup-rss: 5352.9 MiB, Avail: 11795.3 MiB, Total: 16859.5 MiB
daemon.log:2021-02-14T09:19:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.44%, Anon: 5064.1 MiB, Avail: 12135.8 MiB, Total: 17199.9 MiB
daemon.log:2021-02-14T09:20:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:20:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5301.3 MiB (Base: 4711.2, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:20:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5067.7 MiB, cgroup-rss: 5356.7 MiB, Avail: 11789.5 MiB, Total: 16857.3 MiB
daemon.log:2021-02-14T09:20:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5067.7 MiB, Avail: 12130.2 MiB, Total: 17197.9 MiB
daemon.log:2021-02-14T09:20:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:20:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5301.2 MiB (Base: 4711.3, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T09:20:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5067.8 MiB, cgroup-rss: 5356.6 MiB, Avail: 11789.0 MiB, Total: 16856.8 MiB
daemon.log:2021-02-14T09:20:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5067.8 MiB, Avail: 12129.7 MiB, Total: 17197.5 MiB
daemon.log:2021-02-14T09:21:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:21:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5302.1 MiB (Base: 4712.2, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T09:21:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5068.6 MiB, cgroup-rss: 5357.5 MiB, Avail: 11788.0 MiB, Total: 16856.6 MiB
daemon.log:2021-02-14T09:21:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5068.6 MiB, Avail: 12128.7 MiB, Total: 17197.3 MiB
daemon.log:2021-02-14T09:21:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:21:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5302.4 MiB (Base: 4712.2, k8s-system: 590.2), k8s-addon: 0.0
daemon.log:2021-02-14T09:21:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5068.8 MiB, cgroup-rss: 5357.8 MiB, Avail: 11788.1 MiB, Total: 16856.9 MiB
daemon.log:2021-02-14T09:21:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5068.8 MiB, Avail: 12128.8 MiB, Total: 17197.7 MiB
daemon.log:2021-02-14T09:22:25.000 controller-0 collectd[130887]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"assert","resource":"memory_platform"}
daemon.log:2021-02-14T09:22:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:22:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5302.5 MiB (Base: 4712.7, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T09:22:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5069.4 MiB, cgroup-rss: 5358.0 MiB, Avail: 11787.2 MiB, Total: 16856.6 MiB
daemon.log:2021-02-14T09:22:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.48%, Anon: 5069.4 MiB, Avail: 12128.0 MiB, Total: 17197.4 MiB
daemon.log:2021-02-14T09:22:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:22:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5302.9 MiB (Base: 4712.9, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T09:22:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5069.6 MiB, cgroup-rss: 5358.4 MiB, Avail: 11786.5 MiB, Total: 16856.1 MiB
daemon.log:2021-02-14T09:22:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.48%, Anon: 5069.6 MiB, Avail: 12127.3 MiB, Total: 17196.9 MiB
daemon.log:2021-02-14T09:23:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:23:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5302.8 MiB (Base: 4712.7, k8s-system: 590.2), k8s-addon: 0.0
daemon.log:2021-02-14T09:23:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5070.2 MiB, cgroup-rss: 5358.3 MiB, Avail: 11786.6 MiB, Total: 16856.8 MiB
daemon.log:2021-02-14T09:23:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.48%, Anon: 5070.1 MiB, Avail: 12127.4 MiB, Total: 17197.5 MiB
daemon.log:2021-02-14T09:23:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:23:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5301.0 MiB (Base: 4711.0, k8s-system: 590.1), k8s-addon: 0.0
daemon.log:2021-02-14T09:23:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5068.2 MiB, cgroup-rss: 5356.4 MiB, Avail: 11788.6 MiB, Total: 16856.8 MiB
daemon.log:2021-02-14T09:23:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5068.2 MiB, Avail: 12129.3 MiB, Total: 17197.6 MiB
daemon.log:2021-02-14T09:24:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:24:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5301.0 MiB (Base: 4711.2, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T09:24:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5068.0 MiB, cgroup-rss: 5356.1 MiB, Avail: 11788.9 MiB, Total: 16856.9 MiB
daemon.log:2021-02-14T09:24:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.48%, Anon: 5068.9 MiB, Avail: 12127.9 MiB, Total: 17196.8 MiB
daemon.log:2021-02-14T09:24:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5300.1 MiB (Base: 4710.3, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T09:24:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:24:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5067.8 MiB, cgroup-rss: 5355.5 MiB, Avail: 11789.2 MiB, Total: 16857.0 MiB
daemon.log:2021-02-14T09:24:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5067.8 MiB, Avail: 12130.1 MiB, Total: 17197.8 MiB
daemon.log:2021-02-14T09:25:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:25:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5303.5 MiB (Base: 4713.7, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T09:25:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5070.6 MiB, cgroup-rss: 5358.9 MiB, Avail: 11786.2 MiB, Total: 16856.8 MiB
daemon.log:2021-02-14T09:25:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.48%, Anon: 5070.6 MiB, Avail: 12127.1 MiB, Total: 17197.7 MiB
daemon.log:2021-02-14T09:25:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:25:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.2%; Reserved: 4600.0 MiB, Platform: 5301.3 MiB (Base: 4711.4, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T09:25:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5068.2 MiB, cgroup-rss: 5356.7 MiB, Avail: 11787.6 MiB, Total: 16855.8 MiB
daemon.log:2021-02-14T09:25:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.47%, Anon: 5068.2 MiB, Avail: 12128.5 MiB, Total: 17196.7 MiB
daemon.log:2021-02-14T09:26:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5303.8 MiB (Base: 4714.0, k8s-system: 589.8), k8s-addon: 0.0
daemon.log:2021-02-14T09:26:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5070.1 MiB, cgroup-rss: 5359.2 MiB, Avail: 11785.6 MiB, Total: 16855.7 MiB
daemon.log:2021-02-14T09:26:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.48%, Anon: 5070.1 MiB, Avail: 12126.5 MiB, Total: 17196.5 MiB
daemon.log:2021-02-14T09:26:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
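The "degrade notifier" entries interleaved above (09:01:25, 09:11:55, 09:22:25) carry a compact JSON payload. A minimal decoding sketch, purely illustrative; the field meanings are inferred from the payload itself, and I am assuming a matching "clear" message exists to drop the condition:

    import json

    payload = ('{"service":"collectd_notifier","hostname":"controller-0",'
               '"degrade":"assert","resource":"memory_platform"}')

    msg = json.loads(payload)
    # "assert" appears to raise the degrade condition for the named resource.
    if msg["degrade"] == "assert":
        print(f'{msg["hostname"]}: degrade asserted for {msg["resource"]}')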
daemon.log:2021-02-14T09:26:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.4%; Reserved: 4600.0 MiB, Platform: 5309.1 MiB (Base: 4719.2, k8s-system: 589.9), k8s-addon: 0.0
daemon.log:2021-02-14T09:26:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5076.1 MiB, cgroup-rss: 5364.5 MiB, Avail: 11779.9 MiB, Total: 16856.0 MiB
daemon.log:2021-02-14T09:26:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.52%, Anon: 5076.1 MiB, Avail: 12120.8 MiB, Total: 17196.8 MiB
daemon.log:2021-02-14T09:26:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:27:25.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5304.8 MiB (Base: 4714.8, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:27:25.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:27:25.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5072.0 MiB, cgroup-rss: 5360.2 MiB, Avail: 11783.5 MiB, Total: 16855.5 MiB
daemon.log:2021-02-14T09:27:25.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.49%, Anon: 5072.0 MiB, Avail: 12124.3 MiB, Total: 17196.3 MiB
daemon.log:2021-02-14T09:27:55.000 controller-0 collectd[130887]: info alarm notifier host=controller-0 reported no value (Host controller-0, plugin memory (instance platform) type percent (instance used): All data sources are within range again. Current value of "value" is nan.)
daemon.log:2021-02-14T09:27:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 115.3%; Reserved: 4600.0 MiB, Platform: 5303.9 MiB (Base: 4713.9, k8s-system: 590.0), k8s-addon: 0.0
daemon.log:2021-02-14T09:27:55.000 controller-0 collectd[130887]: info 4K memory usage: Anon: 30.1%, Anon: 5070.6 MiB, cgroup-rss: 5359.3 MiB, Avail: 11785.0 MiB, Total: 16855.6 MiB
daemon.log:2021-02-14T09:27:55.000 controller-0 collectd[130887]: info 4K numa memory usage: node0, Anon: 29.49%, Anon: 5070.6 MiB, Avail: 12125.8 MiB, Total: 17196.4 MiB
dcmanager/dcmanager.log:2021-02-10 17:04:26.302 112415 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
dcmanager/dcmanager.log:2021-02-10 17:04:26.302 112414 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
dcorch/dcorch.log:2021-02-10 09:17:02.796 112693 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
dcorch/dcorch.log:2021-02-10 09:18:41.627 112694 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
dcorch/dcorch.log:2021-02-10 17:04:26.301 112681 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
dcorch/dcorch.log:2021-02-10 17:04:31.280 112682 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
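For reference, the option these warnings point at is standard keystonemiddleware configuration and lives in the [keystone_authtoken] section of each affected service's config file. Something like the following fragment; the address is a placeholder, and whether StarlingX wants this set per service is a separate question for the service owners:

    [keystone_authtoken]
    # Use memcached for the auth_token cache instead of the deprecated
    # in-process cache; host and port below are placeholders.
    memcached_servers = 192.168.204.2:11211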
dmesg:[ 0.000000] Base memory trampoline at [(____ptrval____)] 99000 size 24576
dmesg:[ 0.000000] Reserving 160MB of memory at 1872MB for crashkernel (System RAM: 18431MB)
dmesg:[ 0.000000] Early memory node ranges
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0x7ffde000-0x7fffffff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0x80000000-0xafffffff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
dmesg:[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
dmesg:[ 0.000000] Memory: 1978384K/18873840K available (12292K kernel code, 1354K rwdata, 3652K rodata, 2120K init, 5364K bss, 576936K reserved, 0K cma-reserved)
dmesg:[ 0.029346] Freeing SMP alternatives memory: 28K
dmesg:[ 0.625524] Freeing initrd memory: 19852K
dmesg:[ 0.725653] Non-volatile memory driver v1.3
dmesg:[ 0.791937] Freeing unused decrypted memory: 2040K
dmesg:[ 0.793656] Freeing unused kernel memory: 2120K
dmesg:[ 0.814192] Freeing unused kernel memory: 2020K
dmesg:[ 0.817872] Freeing unused kernel memory: 444K
dmesg.old:[ 0.000000] Base memory trampoline at [(____ptrval____)] 99000 size 24576
dmesg.old:[ 0.000000] Reserving 160MB of memory at 1872MB for crashkernel (System RAM: 18431MB)
dmesg.old:[ 0.000000] Early memory node ranges
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0x7ffde000-0x7fffffff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0x80000000-0xafffffff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
dmesg.old:[ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
dmesg.old:[ 0.000000] Memory: 1978384K/18873840K available (12292K kernel code, 1354K rwdata, 3652K rodata, 2120K init, 5364K bss, 576936K reserved, 0K cma-reserved)
dmesg.old:[ 0.016788] Freeing SMP alternatives memory: 28K
dmesg.old:[ 0.586654] Freeing initrd memory: 19852K
dmesg.old:[ 0.692041] Non-volatile memory driver v1.3
dmesg.old:[ 0.758423] Freeing unused decrypted memory: 2040K
dmesg.old:[ 0.760082] Freeing unused kernel memory: 2120K
dmesg.old:[ 0.769929] Freeing unused kernel memory: 2020K
dmesg.old:[ 0.771327] Freeing unused kernel memory: 444K
fm-event.log:2021-02-10T08:57:25.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 80.00%, actual 88.83%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "major", "state" : "set", "timestamp" : "2021-02-10 08:57:25.484131" }
fm-event.log:2021-02-10T09:17:55.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "critical", "state" : "set", "timestamp" : "2021-02-10 09:17:55.482154" }
guestServer.log:2021-02-10T08:40:17.712 1| Instances:0 Allocs:0 Memory:0
hbsAgent.log:2021-02-10T08:54:26.672 [105134.00040] controller-0 hbsAgent --- msgClass.cpp ( 867) setSocketMemory : Info : Setting enp7s2 rx pulse socket memory to 425984 bytes
kern.log:2021-02-10T06:27:35.826 localhost kernel: debug [ 0.000000] Base memory trampoline at [(____ptrval____)] 99000 size 24576
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] Reserving 160MB of memory at 1872MB for crashkernel (System RAM: 18431MB)
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] Early memory node ranges
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x7ffde000-0x7fffffff]
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x80000000-0xafffffff]
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
kern.log:2021-02-10T06:27:35.827 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
kern.log:2021-02-10T06:27:35.828 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
kern.log:2021-02-10T06:27:35.828 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
kern.log:2021-02-10T06:27:35.828 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
kern.log:2021-02-10T06:27:35.828 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
kern.log:2021-02-10T06:27:35.828 localhost kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
kern.log:2021-02-10T06:27:35.828 localhost kernel: info [ 0.000000] Memory: 1978384K/18873840K available (12292K kernel code, 1354K rwdata, 3652K rodata, 2120K init, 5364K bss, 576936K reserved, 0K cma-reserved)
kern.log:2021-02-10T06:27:35.829 localhost kernel: info [ 0.016788] Freeing SMP alternatives memory: 28K
kern.log:2021-02-10T06:27:35.831 localhost kernel: info [ 0.586654] Freeing initrd memory: 19852K
kern.log:2021-02-10T06:27:35.831 localhost kernel: info [ 0.692041] Non-volatile memory driver v1.3
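As an aside on the "Reserving 160MB of memory at 1872MB for crashkernel" lines: that is the kernel honouring a crashkernel= boot parameter, and the reservation comes out of the 18431MB of system RAM before userspace sees it. Roughly, a command-line fragment like the following produces the logged reservation (illustrative only; check /proc/cmdline on the host for the real setting, and note the kernel picks the placement itself unless it is pinned with an @offset):

    crashkernel=160M
    # or pinned explicitly at the logged offset:
    crashkernel=160M@1872M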
kern.log:2021-02-10T06:27:35.832 localhost kernel: info [ 0.758423] Freeing unused decrypted memory: 2040K
kern.log:2021-02-10T06:27:35.832 localhost kernel: info [ 0.760082] Freeing unused kernel memory: 2120K
kern.log:2021-02-10T06:27:35.832 localhost kernel: info [ 0.769929] Freeing unused kernel memory: 2020K
kern.log:2021-02-10T06:27:35.832 localhost kernel: info [ 0.771327] Freeing unused kernel memory: 444K
kern.log:2021-02-10T07:22:56.008 localhost kernel: info [ 3323.264891] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
kern.log:2021-02-10T08:41:55.553 controller-0 kernel: debug [ 0.000000] Base memory trampoline at [(____ptrval____)] 99000 size 24576
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] Reserving 160MB of memory at 1872MB for crashkernel (System RAM: 18431MB)
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] Early memory node ranges
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000effff]
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x000f0000-0x000fffff]
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x7ffde000-0x7fffffff]
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0x80000000-0xafffffff]
kern.log:2021-02-10T08:41:55.554 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
kern.log:2021-02-10T08:41:55.555 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
kern.log:2021-02-10T08:41:55.555 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
kern.log:2021-02-10T08:41:55.555 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
kern.log:2021-02-10T08:41:55.555 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
kern.log:2021-02-10T08:41:55.555 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xff000000-0xfffbffff]
kern.log:2021-02-10T08:41:55.555 controller-0 kernel: info [ 0.000000] PM: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
kern.log:2021-02-10T08:41:55.555 controller-0 kernel: info [ 0.000000] Memory: 1978384K/18873840K available (12292K kernel code, 1354K rwdata, 3652K rodata, 2120K init, 5364K bss, 576936K reserved, 0K cma-reserved)
kern.log:2021-02-10T08:41:55.556 controller-0 kernel: info [ 0.029346] Freeing SMP alternatives memory: 28K
kern.log:2021-02-10T08:41:55.557 controller-0 kernel: info [ 0.625524] Freeing initrd memory: 19852K
kern.log:2021-02-10T08:41:55.558 controller-0 kernel: info [ 0.725653] Non-volatile memory driver v1.3
kern.log:2021-02-10T08:41:55.558 controller-0 kernel: info [ 0.791937] Freeing unused decrypted memory: 2040K
kern.log:2021-02-10T08:41:55.558 controller-0 kernel: info [ 0.793656] Freeing unused kernel memory: 2120K
kern.log:2021-02-10T08:41:55.558 controller-0 kernel: info [ 0.814192] Freeing unused kernel memory: 2020K
kern.log:2021-02-10T08:41:55.558 controller-0 kernel: info [ 0.817872] Freeing unused kernel memory: 444K
kern.log:2021-02-10T08:54:19.242 controller-0 kernel: info [ 746.531346] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
keystone/keystone-all.log:2021-02-10 07:11:04.298 80860 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.: error: [Errno 11] Resource temporarily unavailable
keystone/keystone-all.log:2021-02-10 07:11:09.842 80861 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.: error: [Errno 11] Resource temporarily unavailable
keystone/keystone-all.log:2021-02-10 07:25:44.768 133865 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.: error: [Errno 11] Resource temporarily unavailable
keystone/keystone-all.log:2021-02-10 07:25:50.268 133866 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.: error: [Errno 11] Resource temporarily unavailable
keystone/keystone-all.log:2021-02-10 08:54:10.730 110633 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.: error: [Errno 11] Resource temporarily unavailable
keystone/keystone-all.log:2021-02-10 08:54:20.635 110632 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.: error: [Errno 11] Resource temporarily unavailable
keystone/keystone-all.log:2021-02-10 09:17:02.796 112693 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
keystone/keystone-all.log:2021-02-10 09:17:04.139 104884 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
keystone/keystone-all.log:2021-02-10 09:18:41.583 104883 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
keystone/keystone-all.log:2021-02-10 09:18:41.627 112694 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
keystone/keystone-all.log:2021-02-10 09:18:42.237 104882 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
keystone/keystone-all.log:2021-02-10 09:30:43.845 104881 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
keystone/keystone-all.log:2021-02-10 16:16:26.915 104880 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option. keystone/keystone-all.log:2021-02-10 17:04:26.301 112681 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option. keystone/keystone-all.log:2021-02-10 17:04:26.302 112415 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option. keystone/keystone-all.log:2021-02-10 17:04:26.302 112414 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option. keystone/keystone-all.log:2021-02-10 17:04:31.280 112682 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option. 
mtcAgent.log:2021-02-10T07:23:06.209 [134046.00078] controller-0 mtcAgent --- msgClass.cpp ( 867) setSocketMemory : Info : Setting lo mtce command and event receiver (Mgmnt network) to 425984 bytes mtcAgent.log:2021-02-10T08:54:13.237 [111342.00087] controller-0 mtcAgent --- msgClass.cpp ( 867) setSocketMemory : Info : Setting enp7s2 mtce command and event receiver (Mgmnt network) to 425984 bytes mtcAgent.log:2021-02-10T09:17:55.482 [111342.00167] controller-0 mtcAgent hbs nodeClass.cpp (5229) collectd_notify_handler : Info : controller-0 collectd degrade state change ; clear -> assert (due to memory_platform) platform.log:2021-02-10T08:54:27.442 controller-0 sm-api warning 105295 keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option. pods/kube-system_kube-controller-manager-controller-0_beb4cf3721fe7ab7384230d84f609a39/kube-controller-manager/0.log:2021-02-10T07:22:32.801014141Z stderr F I0210 07:22:32.800882 1 serving.go:313] Generated self-signed cert in-memory pods/kube-system_kube-controller-manager-controller-0_beb4cf3721fe7ab7384230d84f609a39/kube-controller-manager/1.log:2021-02-10T08:53:02.301677043Z stderr F I0210 08:53:02.301499 1 serving.go:313] Generated self-signed cert in-memory pods/kube-system_kube-controller-manager-controller-0_beb4cf3721fe7ab7384230d84f609a39/kube-controller-manager/1.log:2021-02-10T16:45:00.742032253Z stderr F E0210 16:45:00.741872 1 daemon_controller.go:292] kube-system/kube-multus-ds-amd64 failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-multus-ds-amd64", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-multus-ds-amd64", UID:"5a742b08-6a54-471f-a847-347c4db3ce87", ResourceVersion:"119800", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63748538560, loc:(*time.Location)(0x6d06200)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"multus", "name":"multus", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"multus\",\"name\":\"multus\",\"tier\":\"node\"},\"name\":\"kube-multus-ds-amd64\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"name\":\"multus\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"multus\",\"name\":\"multus\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"command\":[\"/bin/bash\",\"-cex\",\"#!/bin/bash\\nsed \\\"s|__KUBERNETES_NODE_NAME__|${KUBERNETES_NODE_NAME}|g\\\" /tmp/multus-conf/05-multus.conf \\u003e /usr/src/multus-cni/images/05-multus.conf\\n/entrypoint.sh 
--multus-conf-file=/usr/src/multus-cni/images/05-multus.conf\\n\"],\"env\":[{\"name\":\"KUBERNETES_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}}],\"image\":\"registry.local:9001/docker.io/nfvpe/multus:v3.4\",\"name\":\"kube-multus\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"50m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/host/etc/cni/net.d\",\"name\":\"cni\"},{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cnibin\"},{\"mountPath\":\"/tmp/multus-conf\",\"name\":\"multus-cfg\"}]}],\"hostNetwork\":true,\"imagePullSecrets\":[{\"name\":\"registry-local-secret\"}],\"nodeSelector\":{\"kubernetes.io/arch\":\"amd64\"},\"serviceAccountName\":\"multus\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni\"},{\"hostPath\":{\"path\":\"/usr/libexec/cni\"},\"name\":\"cnibin\"},{\"configMap\":{\"items\":[{\"key\":\"cni-conf.json\",\"path\":\"05-multus.conf\"}],\"name\":\"multus-cni-config\"},\"name\":\"multus-cfg\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":1},\"type\":\"RollingUpdate\"}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0025bd800), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025bd820)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0025bd840), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025bd860)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0025bd880), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"multus", "name":"multus", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0025bd8a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), 
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"cnibin", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0025bd8c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"multus-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00246bdc0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-multus", Image:"registry.local:9001/docker.io/nfvpe/multus:v3.4", Command:[]string{"/bin/bash", "-cex", "#!/bin/bash\nsed \"s|__KUBERNETES_NODE_NAME__|${KUBERNETES_NODE_NAME}|g\" /tmp/multus-conf/05-multus.conf > /usr/src/multus-cni/images/05-multus.conf\n/entrypoint.sh --multus-conf-file=/usr/src/multus-cni/images/05-multus.conf\n"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"KUBERNETES_NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0025bd900)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:50, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni", ReadOnly:false, MountPath:"/host/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"cnibin", ReadOnly:false, MountPath:"/host/opt/cni/bin", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"multus-cfg", ReadOnly:false, MountPath:"/tmp/multus-conf", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0023facd0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0019e9f08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/arch":"amd64"}, ServiceAccountName:"multus", DeprecatedServiceAccount:"multus", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0027398f0), ImagePullSecrets:[]v1.LocalObjectReference{v1.LocalObjectReference{Name:"registry-local-secret"}}, Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000516e98)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0019e9f4c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-multus-ds-amd64": the object has been modified; please apply your changes to the latest version and try again pods/kube-system_kube-scheduler-controller-0_04abb2ef72685c7615231f0f216c924e/kube-scheduler/0.log:2021-02-10T07:22:32.617980342Z stderr F I0210 07:22:32.617263 1 serving.go:313] Generated self-signed cert in-memory 
pods/kube-system_kube-scheduler-controller-0_04abb2ef72685c7615231f0f216c924e/kube-scheduler/1.log:2021-02-10T08:53:02.268796913Z stderr F I0210 08:53:02.266990 1 serving.go:313] Generated self-signed cert in-memory pods/armada_armada-api-c8497f9b6-twvs6_8b378ff3-d7cd-489e-8854-ec6c7711cc7e/armada-api/0.log:2021-02-10T07:23:10.571628837Z stderr F your memory page size is 4096 bytes pods/kube-system_calico-node-nq8j2_50627d05-79a2-48c1-97f5-4b1c257f214d/calico-node/11.log:2021-02-10T09:21:04.356871607Z stdout F 2021-02-10 09:21:04.355 [INFO][56] daemon.go 293: Successfully loaded configuration. GOMAXPROCS=6 builddate="f96a2f07b29687e0580c148895d180eb0f7a2954" config=&config.Config{UseInternalDataplaneDriver:true, DataplaneDriver:"calico-iptables-plugin", DatastoreType:"kubernetes", FelixHostname:"controller-0", EtcdAddr:"127.0.0.1:2379", EtcdScheme:"http", EtcdKeyFile:"", EtcdCertFile:"", EtcdCaFile:"", EtcdEndpoints:[]string(nil), TyphaAddr:"", TyphaK8sServiceName:"", TyphaK8sNamespace:"kube-system", TyphaReadTimeout:30000000000, TyphaWriteTimeout:10000000000, TyphaKeyFile:"", TyphaCertFile:"", TyphaCAFile:"", TyphaCN:"", TyphaURISAN:"", Ipv6Support:false, IptablesBackend:"legacy", RouteRefreshInterval:90000000000, DeviceRouteSourceAddress:net.IP(nil), DeviceRouteProtocol:3, RemoveExternalRoutes:true, IptablesRefreshInterval:90000000000, IptablesPostWriteCheckIntervalSecs:1000000000, IptablesLockFilePath:"/run/xtables.lock", IptablesLockTimeoutSecs:0, IptablesLockProbeIntervalMillis:50000000, IpsetsRefreshInterval:10000000000, MaxIpsetSize:1048576, XDPRefreshInterval:90000000000, PolicySyncPathPrefix:"", NetlinkTimeoutSecs:10000000000, MetadataAddr:"", MetadataPort:8775, OpenstackRegion:"", InterfacePrefix:"cali", InterfaceExclude:[]*regexp.Regexp{(*regexp.Regexp)(0xc0006ec5a0)}, ChainInsertMode:"insert", DefaultEndpointToHostAction:"ACCEPT", IptablesFilterAllowAction:"ACCEPT", IptablesMangleAllowAction:"ACCEPT", LogPrefix:"calico-packet", LogFilePath:"", LogSeverityFile:"", LogSeverityScreen:"INFO", LogSeveritySys:"", VXLANEnabled:false, VXLANPort:4789, VXLANVNI:4096, VXLANMTU:1410, IPv4VXLANTunnelAddr:net.IP(nil), VXLANTunnelMACAddr:"", IpInIpEnabled:true, IpInIpMtu:1440, IpInIpTunnelAddr:net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0xac, 0x10, 0xc0, 0x40}, ReportingIntervalSecs:0, ReportingTTLSecs:90000000000, EndpointReportingEnabled:false, EndpointReportingDelaySecs:1000000000, IptablesMarkMask:0xffff0000, DisableConntrackInvalidCheck:false, HealthEnabled:true, HealthPort:9099, HealthHost:"localhost", PrometheusMetricsEnabled:false, PrometheusMetricsHost:"", PrometheusMetricsPort:9091, PrometheusGoMetricsEnabled:true, PrometheusProcessMetricsEnabled:true, FailsafeInboundHostPorts:[]config.ProtoPort{config.ProtoPort{Protocol:"tcp", Port:0x16}, config.ProtoPort{Protocol:"udp", Port:0x44}, config.ProtoPort{Protocol:"tcp", Port:0xb3}, config.ProtoPort{Protocol:"tcp", Port:0x192b}}, FailsafeOutboundHostPorts:[]config.ProtoPort{config.ProtoPort{Protocol:"udp", Port:0x35}, config.ProtoPort{Protocol:"udp", Port:0x43}, config.ProtoPort{Protocol:"tcp", Port:0xb3}, config.ProtoPort{Protocol:"tcp", Port:0x192b}}, KubeNodePortRanges:[]numorstring.Port{numorstring.Port{MinPort:0x7530, MaxPort:0x7fff, PortName:""}}, NATPortRange:numorstring.Port{MinPort:0x0, MaxPort:0x0, PortName:""}, NATOutgoingAddress:net.IP(nil), UsageReportingEnabled:true, UsageReportingInitialDelaySecs:300000000000, UsageReportingIntervalSecs:86400000000000, 
ClusterGUID:"934e9f497ae94de3b2e2d3b6332db817", ClusterType:"k8s,bgp,kdd", CalicoVersion:"v3.12.0", ExternalNodesCIDRList:[]string(nil), DebugMemoryProfilePath:"", DebugCPUProfilePath:"/tmp/felix-cpu-.pprof", DebugDisableLogDropping:false, DebugSimulateCalcGraphHangAfter:0, DebugSimulateDataplaneHangAfter:0, sourceToRawConfig:map[config.Source]map[string]string{0x1:map[string]string{"CalicoVersion":"v3.12.0", "ClusterGUID":"934e9f497ae94de3b2e2d3b6332db817", "ClusterType":"k8s,bgp,kdd", "IpInIpEnabled":"true", "LogSeverityScreen":"Info", "ReportingIntervalSecs":"0"}, 0x2:map[string]string{"IpInIpTunnelAddr":"172.16.192.64"}, 0x3:map[string]string{"LogFilePath":"None", "LogSeverityFile":"None", "LogSeveritySys":"None", "MetadataAddr":"None"}, 0x4:map[string]string{"datastoretype":"kubernetes", "defaultendpointtohostaction":"ACCEPT", "failsafeinboundhostports":"tcp:22, udp:68, tcp:179, tcp:6443", "failsafeoutboundhostports":"udp:53, udp:67, tcp:179, tcp:6443", "felixhostname":"controller-0", "healthenabled":"true", "ipinipmtu":"1440", "ipv6support":"false", "logseverityscreen":"info"}}, rawValues:map[string]string{"CalicoVersion":"v3.12.0", "ClusterGUID":"934e9f497ae94de3b2e2d3b6332db817", "ClusterType":"k8s,bgp,kdd", "DatastoreType":"kubernetes", "DefaultEndpointToHostAction":"ACCEPT", "FailsafeInboundHostPorts":"tcp:22, udp:68, tcp:179, tcp:6443", "FailsafeOutboundHostPorts":"udp:53, udp:67, tcp:179, tcp:6443", "FelixHostname":"controller-0", "HealthEnabled":"true", "IpInIpEnabled":"true", "IpInIpMtu":"1440", "IpInIpTunnelAddr":"172.16.192.64", "Ipv6Support":"false", "LogFilePath":"None", "LogSeverityFile":"None", "LogSeverityScreen":"info", "LogSeveritySys":"None", "MetadataAddr":"None", "ReportingIntervalSecs":"0"}, Err:error(nil), IptablesNATOutgoingInterfaceFilter:"", SidecarAccelerationEnabled:false, XDPEnabled:true, GenericXDPEnabled:false, loadClientConfigFromEnvironment:(func() (*apiconfig.CalicoAPIConfig, error))(0x11d9c30), useNodeResourceUpdates:false} gitcommit="2020-01-27T19:00:18+0000" version="v3.12.0" puppet/2021-02-10-08-46-01_controller/puppet.log:2021-02-10T08:46:31.147 Notice: 2021-02-10 08:46:29 +0000 Scope(Class[Platform::Kubernetes::Cgroup]): Create /sys/fs/cgroup/[cpuset, cpu, cpuacct, memory, systemd, pids]/k8s-infra puppet/2021-02-10-08-46-01_controller/puppet.log:2021-02-10T08:46:36.581 Debug: 2021-02-10 08:46:35 +0000 Performing a hiera indirector lookup of platform::memcached::params::max_memory with options {:variables=>Scope(Class[Platform::Memcached::Params]), :merge=>#>, @value_type=#]>>>]>, @options={}>} puppet/2021-02-10-08-46-01_controller/puppet.log:2021-02-10T08:46:36.585 Debug: 2021-02-10 08:46:35 +0000 hiera(): Looking up platform::memcached::params::max_memory in YAML backend puppet/2021-02-10-08-46-01_controller/puppet.log:2021-02-10T08:46:36.618 Debug: 2021-02-10 08:46:35 +0000 hiera(): Found platform::memcached::params::max_memory in personality puppet/2021-02-10-08-46-01_controller/puppet.log:2021-02-10T08:46:37.065 Debug: 2021-02-10 08:46:35 +0000 Performing a hiera indirector lookup of memcached::lock_memory with options {:variables=>Scope(Class[Memcached]), :merge=>#>, @value_type=#]>>>]>, @options={}>} puppet/2021-02-10-08-46-01_controller/puppet.log:2021-02-10T08:46:37.068 Debug: 2021-02-10 08:46:35 +0000 hiera(): Looking up memcached::lock_memory in YAML backend puppet/2021-02-10-08-46-01_controller/puppet.log:2021-02-10T08:52:03.807 Notice: 2021-02-10 08:52:03 +0000 
/Stage[main]/Platform::Kubernetes::Cgroup/File[/sys/fs/cgroup/memory/k8s-infra]/ensure: created puppet/2021-02-10-08-46-01_controller/puppet.log:2021-02-10T08:52:03.811 Debug: 2021-02-10 08:52:03 +0000 /Stage[main]/Platform::Kubernetes::Cgroup/File[/sys/fs/cgroup/memory/k8s-infra]: The container Class[Platform::Kubernetes::Cgroup] will propagate my refresh event puppet/2021-02-10-08-53-56_worker/puppet.log:2021-02-10T08:54:12.834 Notice: 2021-02-10 08:54:11 +0000 Scope(Class[Platform::Kubernetes::Cgroup]): Create /sys/fs/cgroup/[cpuset, cpu, cpuacct, memory, systemd, pids]/k8s-infra rabbitmq/rabbit at localhost.log.1:Memory limit set to 7157MB of 17893MB total. rabbitmq/rabbit at localhost.log.1:Memory limit set to 7157MB of 17893MB total. sysinv.log:sysinv 2021-02-10 07:09:13.687 74282 INFO sysinv.agent.manager [-] _report_to_conductor initial_reports_required=set(['pci_device', 'pv', 'cpu', 'lvg', 'numa', 'memory', 'disk', 'port']) sysinv.log:sysinv 2021-02-10 07:10:15.315 74282 INFO sysinv.agent.manager [-] _report_to_conductor initial_reports_required=set(['pci_device', 'pv', 'cpu', 'lvg', 'numa', 'memory', 'disk', 'port']) sysinv.log:sysinv 2021-02-10 07:11:32.034 82923 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option. sysinv.log:sysinv 2021-02-10 07:11:33.297 82924 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option. sysinv.log:sysinv 2021-02-10 07:11:47.115 74282 INFO sysinv.agent.manager [-] _report_to_conductor initial_reports_required=set(['pci_device', 'pv', 'cpu', 'lvg', 'numa', 'memory', 'disk', 'port']) sysinv.log:sysinv 2021-02-10 07:11:48.288 80431 INFO sysinv.conductor.manager [-] Attempting to create new device {u'sriov_numvfs': 0, u'driver': u'virtio-pci', u'prevision': u'0', u'sriov_vf_driver': None, u'sriov_vf_pdevice_id': None, 'host_id': 1, u'psvendor': u'Red Hat, Inc.', u'extra_info': None, u'name': u'pci_0000_04_00_0', u'numa_node': u'-1', u'pdevice_id': u'1002', u'pclass': u'Unclassified device [00ff]', u'pvendor': u'Red Hat, Inc.', u'sriov_vfs_pci_address': u'', u'psdevice': u'Device 0005', u'pciaddr': u'0000:04:00.0', u'pdevice': u'Virtio memory balloon', u'pvendor_id': u'1af4', u'pclass_id': u'00ff00', u'enabled': False, u'sriov_totalvfs': None} on host 1 sysinv.log:sysinv 2021-02-10 08:39:28.959 82923 INFO sysinv.api.controllers.v1.host [-] Memory: Total=17893 MiB, Allocated=4600 MiB, 2M: 1696 pages None pages pending, 1G: 3 pages None pages pending sysinv.log:sysinv 2021-02-10 08:54:02.652 112706 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. 
The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
sysinv.log:sysinv 2021-02-10 08:54:10.445 101290 INFO sysinv.agent.manager [-] _report_to_conductor initial_reports_required=set(['pci_device', 'pv', 'cpu', 'lvg', 'numa', 'memory', 'disk', 'port'])
sysinv.log:sysinv 2021-02-10 08:54:11.920 101290 INFO sysinv.agent.manager [-] _report_to_conductor initial_reports_required=set(['pci_device', 'pv', 'cpu', 'lvg', 'numa', 'memory', 'disk', 'port'])
sysinv.log:sysinv 2021-02-10 08:54:13.117 112707 WARNING keystonemiddleware.auth_token [-] Using the in-process token cache is deprecated as of the 4.2.0 release and may be removed in the 5.0.0 release or the 'O' development cycle. The in-process cache causes inconsistent results and high memory usage. When the feature is removed the auth_token middleware will not cache tokens by default which may result in performance issues. It is recommended to use memcache for the auth_token token cache by setting the memcached_servers option.
sysinv.log:sysinv 2021-02-10 10:33:29.470 110808 INFO sysinv.conductor.manager [-] Attempting to create new device {u'sriov_numvfs': 0, u'driver': u'virtio-pci', u'prevision': u'0', u'sriov_vf_driver': None, u'sriov_vf_pdevice_id': None, 'host_id': 2, u'psvendor': u'Red Hat, Inc.', u'extra_info': None, u'name': u'pci_0000_04_00_0', u'numa_node': u'-1', u'pdevice_id': u'1002', u'pclass': u'Unclassified device [00ff]', u'pvendor': u'Red Hat, Inc.', u'sriov_vfs_pci_address': u'', u'psdevice': u'Device 0005', u'pciaddr': u'0000:04:00.0', u'pdevice': u'Virtio memory balloon', u'pvendor_id': u'1af4', u'pclass_id': u'00ff00', u'enabled': False, u'sriov_totalvfs': None} on host 2
sysinv.log:sysinv 2021-02-10 16:02:26.857 112706 INFO sysinv.api.controllers.v1.host [-] Memory: Total=17893 MiB, Allocated=4600 MiB, 2M: 1696 pages None pages pending, 1G: 3 pages None pages pending
sysinv.log:sysinv 2021-02-10 16:40:35.053 112707 INFO sysinv.api.controllers.v1.host [-] Update host memory for (controller-1)
sysinv.log:sysinv 2021-02-10 16:40:35.053 112707 INFO sysinv.conductor.rpcapi [-] ConductorApi.update_host_memory: sending host memory update request to conductor
sysinv.log:sysinv 2021-02-10 16:40:46.158 112707 INFO sysinv.api.controllers.v1.host [-] Memory: Total=17893 MiB, Allocated=4600 MiB, 2M: 6646 pages None pages pending, 1G: 12 pages None pages pending

From: Ildiko Vancsa
Sent: Sunday, February 14, 2021 2:42 PM
To: Rai, Ankush
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Alarm "Memory threshold"

Hi Ankush,

Do you have any log entries on the system you could share here that show the memory readings the alarm might be triggered by?

Thanks,
Ildikó

> On Feb 14, 2021, at 09:26, Rai, Ankush wrote:
>
> Hi,
>
> Below alarm is getting raised for every node of the central and edge cloud.
>
> “Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%”
>
> It looks to be the false alarm as nodes are having enough available memory. Please confirm the root cause of this alarm.
>
> Thanks,
> Ankush
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ildiko.vancsa at gmail.com Sun Feb 14 14:24:21 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Sun, 14 Feb 2021 15:24:21 +0100
Subject: [Starlingx-discuss] Alarm "Memory threshold"
In-Reply-To: 
References: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com>
Message-ID: 

Hi,

I saw that you have a ‘reserved for platform’ memory entry with the value of 4600 MiB. I’ve found entries below that report platform memory usage having an over 100% value:

daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.3 MiB (Base: 4684.1, k8s-system: 590.2), k8s-addon: 0.0

So while the overall memory usage in the system isn’t over the threshold, I assume the usage of that reserved amount of memory still exceeds it.

Have you seen any configuration option to increase the amount of platform memory? You can also look into collectd to see what it is reading to get those values. I don’t have access to a StarlingX install to look into this, so I cannot tell where to look for that.

I found some documentation for Kubernetes to set memory limits, but I’m not sure that applies here. (https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)

Have you looked into the above already?

Thanks,
Ildikó

> On Feb 14, 2021, at 10:51, Rai, Ankush wrote:
>
> Not sure exactly which log file to check. Captured some data here, please check if this can help.
>
> Software Version: 20.06
> Memory:
> Reserved for Platform: 4600 MiB
> Usable Total: 13293 MiB
> Available: 13293 MiB
>
> The fm has logged these events.
>
> fm-event.log:2021-02-10T08:57:25.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 80.00%, actual 88.83%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "major", "state" : "set", "timestamp" : "2021-02-10 08:57:25.484131" }
> fm-event.log:2021-02-10T09:17:55.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "critical", "state" : "set", "timestamp" : "2021-02-10 09:17:55.482154" }
> [snip]
>
> From: Ildiko Vancsa
> Sent: Sunday, February 14, 2021 2:42 PM
> To: Rai, Ankush
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Alarm "Memory threshold"
>
>
> Hi Ankush,
>
> Do you have any log entries on the system you could share here that show the memory readings the alarm might be triggered by?
>
> Thanks,
> Ildikó
>
>
> > On Feb 14, 2021, at 09:26, Rai, Ankush wrote:
> >
> > Hi,
> >
> > Below alarm is getting raised for every node of the central and edge cloud.
> >
> > “Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%”
> >
> > It looks to be the false alarm as nodes are having enough available memory. Please confirm the root cause of this alarm.
> >
> > Thanks,
> > Ankush
> >
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
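For readers chasing the same 100.103 alarm, the collectd line above is the key detail: the threshold is evaluated against the platform-reserved slice (4600 MiB here), not against total node memory, which is why a node with plenty of free memory overall can still raise the alarm. A rough way to inspect and enlarge the reservation from the active controller is sketched below; it assumes the sysinv and fm CLIs roughly as shipped in StarlingX 20.06, and the exact flags may differ by release:

  source /etc/platform/openrc
  # show active alarms for the platform memory threshold
  fm alarm-list --query alarm_id=100.103
  # show per-NUMA-node memory, including the platform-reserved amount
  system host-memory-list controller-0
  # raise the platform reservation on NUMA node 0 (the host must be locked first)
  system host-lock controller-0
  system host-memory-modify -m 6000 controller-0 0
  system host-unlock controller-0

The 6000 MiB value is only an example; size it to the observed platform usage (5274.3 MiB in the log above) plus some headroom.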
From alexandru.dimofte at intel.com Sun Feb 14 21:03:34 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Sun, 14 Feb 2021 21:03:34 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210214T023311Z
Message-ID: 

Sanity Test from 2021-February-14 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210214T023311Z/outputs/iso/)

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210214T023311Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

[Logo Description automatically generated]
Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 8408 bytes
Desc: image001.png
URL: 

From Sriram.Dharwadkar at commscope.com Mon Feb 15 03:59:00 2021
From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram)
Date: Mon, 15 Feb 2021 03:59:00 +0000
Subject: [Starlingx-discuss] StarlingX - 4.0 build issue
Message-ID: 

Hi,

We are trying to set up the StarlingX build environment by following this link, under the Download packages section. When I run this command:

cd $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools && bash download_mirror.sh

I'm getting the below errors. Any help would be appreciated.

http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/updates/x86_64/repodata/9aa6c4d73447c5377c6be87fa5dc1da42028fa19830d050572d53ac4b1d8ee53-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/updates/x86_64/repodata/9aa6c4d73447c5377c6be87fa5dc1da42028fa19830d050572d53ac4b1d8ee53-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.2.1511/updates/x86_64/repodata/f2cde4bfb7893be9431c7b6fe326e7b37400c099d3c5fb9d25b6dad95a454ae4-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.2.1511/updates/x86_64/repodata/f2cde4bfb7893be9431c7b6fe326e7b37400c099d3c5fb9d25b6dad95a454ae4-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.2.1511/updates/x86_64/repodata/2a3cb8d136b161dbf3a77be25ee4d7f0f5ee0137ce40730f775a1c95390fb566-filelists.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.2.1511/updates/x86_64/repodata/2a3cb8d136b161dbf3a77be25ee4d7f0f5ee0137ce40730f775a1c95390fb566-filelists.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.3.1611/os/x86_64/repodata/c901f2d76fdc6e829759784f7cc72db5e7189b9f0dd17f155c158371b7f1e09e-filelists.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.3.1611/os/x86_64/repodata/c901f2d76fdc6e829759784f7cc72db5e7189b9f0dd17f155c158371b7f1e09e-filelists.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/os/x86_64/repodata/35713b41d44af605d346a40ca9f8d25e836dea8ec7e8feed5993d856cab9650f-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/os/x86_64/repodata/35713b41d44af605d346a40ca9f8d25e836dea8ec7e8feed5993d856cab9650f-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds') Trying other mirror. http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/os/x86_64/repodata/da4243d99e231c889ebbc77b6959cd7eb60dd94b63825b006705eada66eb9fe0-filelists.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/os/x86_64/repodata/da4243d99e231c889ebbc77b6959cd7eb60dd94b63825b006705eada66eb9fe0-filelists.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds') Trying other mirror. http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.3.1611/updates/x86_64/repodata/eabec8fa6e23e46f54bc072a65cb3d36cae8f5a61dce8d4e7c1ee74f911aa2cd-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.3.1611/updates/x86_64/repodata/eabec8fa6e23e46f54bc072a65cb3d36cae8f5a61dce8d4e7c1ee74f911aa2cd-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds') Trying other mirror. http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/os/x86_64/repodata/8774b62a6c3a7503997fb932bdafa2c4d9a39420e39f6e169e3b095f11bbe8a2-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/os/x86_64/repodata/8774b62a6c3a7503997fb932bdafa2c4d9a39420e39f6e169e3b095f11bbe8a2-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds') Trying other mirror. http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/updates/x86_64/repodata/d5344ec18daf1c28b529e87b26e44f68aac04e44d47162cb2181dca511bfa25a-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/updates/x86_64/repodata/d5344ec18daf1c28b529e87b26e44f68aac04e44d47162cb2181dca511bfa25a-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds') Trying other mirror. http://mirror.starlingx.cengn.ca:80/mirror/centos/epel/dl.fedoraproject.org/pub/epel/testing/7/x86_64/debug/repodata/c024fa4eaa1cf97536872da8b1cf33131b73e41a3a08c684c763ae9d92039a19-filelists.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/epel/dl.fedoraproject.org/pub/epel/testing/7/x86_64/debug/repodata/c024fa4eaa1cf97536872da8b1cf33131b73e41a3a08c684c763ae9d92039a19-filelists.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds') Trying other mirror. http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2: (28, 'Operation too slow. 
Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/updates/x86_64/repodata/ef2823e5451170fcfe7ea04b89f78f42d816df8469e2acd1da7cbe5d3a950a1e-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/updates/x86_64/repodata/ef2823e5451170fcfe7ea04b89f78f42d816df8469e2acd1da7cbe5d3a950a1e-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/updates/x86_64/repodata/9aa6c4d73447c5377c6be87fa5dc1da42028fa19830d050572d53ac4b1d8ee53-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/updates/x86_64/repodata/9aa6c4d73447c5377c6be87fa5dc1da42028fa19830d050572d53ac4b1d8ee53-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/updates/x86_64/repodata/ec427d9c8fc0bfe724f9012833182dc59568017a04b1a71599a0c4b4f909a9c0-filelists.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/updates/x86_64/repodata/ec427d9c8fc0bfe724f9012833182dc59568017a04b1a71599a0c4b4f909a9c0-filelists.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/os/x86_64/repodata/4e8d3a1931f819fc05181a8dfc13466424afc8604727b85aa5f4b3c82e7db36b-filelists.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/os/x86_64/repodata/4e8d3a1931f819fc05181a8dfc13466424afc8604727b85aa5f4b3c82e7db36b-filelists.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.
http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/8774b62a6c3a7503997fb932bdafa2c4d9a39420e39f6e169e3b095f11bbe8a2-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/8774b62a6c3a7503997fb932bdafa2c4d9a39420e39f6e169e3b095f11bbe8a2-primary.sqlite.bz2: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds')
Trying other mirror.

Regards,
Sriram

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
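The repeated "[Errno 12] Timeout ... Less than 1000 bytes/sec" lines above are yum abandoning a slow transfer rather than reporting a missing file, so the repo content itself may be fine. Before rerunning download_mirror.sh, it can help to confirm the mirror is reachable and measure the download rate from the build machine; a quick check with curl against one of the failing repos (assuming the standard repodata/repomd.xml index is present at that path):

  curl -o /dev/null -w 'http=%{http_code} rate=%{speed_download} B/s\n' \
    http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/repomd.xml

If the measured rate stays under roughly 1000 B/s the same timeouts will recur, and retrying later or from a different network is more likely to help than changing anything in the build setup.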
From Sriram.Dharwadkar at commscope.com Mon Feb 15 04:03:48 2021
From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram)
Date: Mon, 15 Feb 2021 04:03:48 +0000
Subject: [Starlingx-discuss] Upgrade MLNX-OFED to 5.2-2.2.0.0 in StarlingX-4.0 to latest
In-Reply-To: 
References: 
Message-ID: 

Does StarlingX support upgrading MLNX-OFED packages? Please let me know if there is a procedure to upgrade the OFED package.

Regards,
Sriram

From: Dharwadkar, Sriram
Sent: Tuesday, February 9, 2021 11:48 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Upgrade MLNX-OFED to 5.2-2.2.0.0 in StarlingX-4.0 to latest

Hi,

I have installed distributed StarlingX 4.0. We are facing one issue with the MLNX-OFED version. In the ConnectX-4 EN NIC that we are using in our platform, we see an issue related to the spoof check parameter. In the Kubernetes environment, after a pod restart, if the pod attaches to the same VF that it was using previously, spoof check becomes ON automatically and the traffic stops going out of that VF.

To solve that issue, our hardware vendor has suggested upgrading MLNX-OFED (5.2-2.2.0.0) and the firmware (MFT 4.16.1). In the StarlingX environment, I tried doing:

# ./install.sh --oem
-E- There are missing packages that are required for installation of MFT.
-I- You can install missing packages using: yum install gcc rpm-build kernel-devel-4.18.0-147.3.1.rt24.96.el8_1.tis.8.x86_64

I could install gcc and kernel-devel-4.18.0-147.3.1.rt24.96.el8_1.tis.8.x86_64, but the rpm-build installation is not going through because of some dependency. How do we go about upgrading these packages?

Regards,
Sriram

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
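On the spoof-check symptom itself: while the OFED and firmware upgrade path is being sorted out, the VF spoof-check flag can usually be toggled from the host with plain iproute2, independently of the OFED stack. A sketch, where the PF name (enp24s0f0) and the VF index are placeholders for the real ones on the node:

  # list VFs on the PF and their current "spoof checking" state
  ip link show enp24s0f0
  # turn spoof checking off for VF 3 of that PF
  ip link set dev enp24s0f0 vf 3 spoofchk off

Note this is a stopgap rather than a substitute for the vendor-recommended upgrade: a driver reload or host reboot can reapply the old state, which matches the behavior described above where the flag flips back on after a pod restart.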
From sinlam at hotmail.com Mon Feb 15 09:58:06 2021
From: sinlam at hotmail.com (Sin Lam Tan)
Date: Mon, 15 Feb 2021 09:58:06 +0000
Subject: [Starlingx-discuss] Starlingx integration with Open Source Mano (OSM)
In-Reply-To: 
References: 
Message-ID: 

Hi Austin,

Thanks for your suggestion. I use my own DNS server (dnsmasq) and use the domain name instead of the IP address, and it works fine with OSM.

I am facing another problem. In the StarlingX OpenStack, I cannot create an internal network using "openstack network create mgmtnet"; the error message is "Error while executing command: HttpException: 503, Unable to create the network. No tenant network is available for allocation."

In StarlingX OpenStack, can I create an internal network without associating it with any physical network card? In the OSM Network Service templates, some internal networks are created automatically, and instantiation fails during their creation.

Hope to hear some suggestions from this group! Thank you!

Regards,
Sin Lam

________________________________
From: Sun, Austin
Sent: Monday, January 18, 2021 9:11 AM
To: Sin Lam Tan ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Starlingx integration with Open Source Mano (OSM)

Hi Sin:

Which stx version do you use? I don't know OSM, but the openstack auth_url could be the issue. In stx 3.0 and later, the OpenStack Keystone services endpoint should be http://keystone.openstack.svc.cluster.local/v3; http://10.10.10.3:5000/v3 is for host services, not for OpenStack services.

Hope this info is useful for you. You can refer to [1] for more info on OpenStack on stx.

[1] https://docs.starlingx.io/deploy_install_guides/r3_release/openstack/access.html#configure-helm-endpoint-domain

Thanks.
BR
Austin Sun.

From: Sin Lam Tan
Sent: Saturday, January 16, 2021 9:28 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Starlingx integration with Open Source Mano (OSM)

Hi,

May I know if anyone has done the integration of StarlingX with OSM? Currently I install StarlingX inside a VM (IP: 10.10.10.3), and install OSM in another VM (IP: 10.10.10.4). Both VMs are connected using the management network 10.10.10.x.

In StarlingX, I install OpenStack using the helm-chart and I am able to access the OpenStack Dashboard from http://10.10.10.3:31000/

In OSM, I link OpenStack to OSM as a VIM using the command:

osm vim-create --name openstack-starlingx --user admin --password St8rlingX* --auth_url http://10.10.10.3:5000/v3 --tenant admin --account_type openstack --config='{security_groups: default, availability_zone: nova, keypair: test, insecure: true}'

I would like to use OSM to deploy VMs using an OSM template. Currently I use the sample hackfest_basic_ns & hackfest_basic_vnf for testing. During "Instantiate NS", I got an error as below:

Operation: INSTANTIATING.f0055db9-fc88-460e-a012-dcf66da7b951, Stage 2/5: deployment of KDUs, VMs and execution environments. Detail: Deploying at VIM: VIM Exception EndpointNotFound: public endpoint for image service not found. Rollback successful.

I don't know how to fix this issue, any suggestion is appreciated. Thank you!

Regards,
Sinlam

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
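On the "No tenant network is available for allocation" error above: Neutron returns this when a project (tenant) network is requested but there are no free self-service segmentation IDs (VLAN or VXLAN) configured to allocate from, so the failure is in network configuration rather than in the create command itself. Assuming the deployment has Neutron's network-segment-range extension enabled, one way to inspect and add a pool with the standard OpenStack client is sketched below; the range values are examples only:

  openstack network segment range list
  # add a shared VXLAN VNI pool that project networks can allocate from
  openstack network segment range create --network-type vxlan \
    --minimum 2000 --maximum 2999 --shared tenant-vxlan-range
  # the failing command should then succeed without naming any physical network
  openstack network create mgmtnet

If the extension is not enabled, the equivalent ranges come from the ml2 plugin configuration (e.g. vni_ranges), which in a StarlingX OpenStack deployment would be set through the stx-openstack Helm overrides.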
From openinfradn at gmail.com Tue Feb 16 03:54:34 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 16 Feb 2021 09:24:34 +0530 Subject: [Starlingx-discuss] Moving from Simplex to Distribute Message-ID: Hi, I have deployed StarlingX AIO Simplex with the idea of moving to Duplex and Distributed model step by step. Is it possible to move forward 'without a fresh installation' of AIO Duplex and Distributed? Regards Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Tue Feb 16 11:37:09 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Tue, 16 Feb 2021 11:37:09 +0000 Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? In-Reply-To: <702790BB-8188-4A07-B84D-FCBDBB84262B@gmail.com> References: <702790BB-8188-4A07-B84D-FCBDBB84262B@gmail.com> Message-ID: I am not aware of this meeting still happening. Greg. From: Ildiko Vancsa Date: Monday, February 15, 2021 at 9:34 AM To: StarlingX ML Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I recognized that the meeting wiki still contains the Packet SIG calls for Tuesdays at 10am PST. Are those calls still happening or is there any other StarlingX team call at that time? I’m asking both to keep the wiki up to date as well as to see if the Zoom account is available at that time. Thanks, Ildikó _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Feb 16 13:56:54 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 16 Feb 2021 14:56:54 +0100 Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? In-Reply-To: References: <702790BB-8188-4A07-B84D-FCBDBB84262B@gmail.com> Message-ID: Hi Greg, Cool, thanks for confirming. BTW, are you or anyone from the project still using the Packet testbed to put together demos, etc? Thanks, Ildikó > On Feb 16, 2021, at 12:37, Waines, Greg wrote: > > I am not aware of this meeting still happening. > Greg. > > From: Ildiko Vancsa > Date: Monday, February 15, 2021 at 9:34 AM > To: StarlingX ML > Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi, > > I recognized that the meeting wiki still contains the Packet SIG calls for Tuesdays at 10am PST. Are those calls still happening or is there any other StarlingX team call at that time? > > I’m asking both to keep the wiki up to date as well as to see if the Zoom account is available at that time. > > Thanks, > Ildikó > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Greg.Waines at windriver.com Tue Feb 16 15:19:36 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Tue, 16 Feb 2021 15:19:36 +0000 Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? In-Reply-To: References: <702790BB-8188-4A07-B84D-FCBDBB84262B@gmail.com> Message-ID: We do still use Packet testbed for some starlingx demos. Greg. 
From: Ildiko Vancsa Date: Tuesday, February 16, 2021 at 8:57 AM To: Greg Waines Cc: StarlingX ML Subject: Re: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? [Please note: This e-mail is from an EXTERNAL e-mail address] Hi Greg, Cool, thanks for confirming. BTW, are you or anyone from the project still using the Packet testbed to put together demos, etc? Thanks, Ildikó On Feb 16, 2021, at 12:37, Waines, Greg > wrote: I am not aware of this meeting still happening. Greg. From: Ildiko Vancsa > Date: Monday, February 15, 2021 at 9:34 AM To: StarlingX ML > Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I recognized that the meeting wiki still contains the Packet SIG calls for Tuesdays at 10am PST. Are those calls still happening or is there any other StarlingX team call at that time? I’m asking both to keep the wiki up to date as well as to see if the Zoom account is available at that time. Thanks, Ildikó _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Feb 16 15:32:21 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 16 Feb 2021 16:32:21 +0100 Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? In-Reply-To: References: <702790BB-8188-4A07-B84D-FCBDBB84262B@gmail.com> Message-ID: <976BC1C5-CA1A-4AF9-9F76-936943AB7174@gmail.com> Cool, I’m happy that you confirmed that. Is there any demo activity running on that hardware that could be documented as a blog post maybe even with a demo video (screen cap or smth)? It would be great to get some new content for the blog, so I was wondering if this might be a good source. Thanks, Ildikó > On Feb 16, 2021, at 16:19, Waines, Greg wrote: > > We do still use Packet testbed for some starlingx demos. > Greg. > > From: Ildiko Vancsa > Date: Tuesday, February 16, 2021 at 8:57 AM > To: Greg Waines > Cc: StarlingX ML > Subject: Re: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi Greg, > > Cool, thanks for confirming. > > BTW, are you or anyone from the project still using the Packet testbed to put together demos, etc? > > Thanks, > Ildikó > > >> On Feb 16, 2021, at 12:37, Waines, Greg wrote: >> >> I am not aware of this meeting still happening. >> Greg. >> >> From: Ildiko Vancsa >> Date: Monday, February 15, 2021 at 9:34 AM >> To: StarlingX ML >> Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running? >> >> [Please note: This e-mail is from an EXTERNAL e-mail address] >> >> Hi, >> >> I recognized that the meeting wiki still contains the Packet SIG calls for Tuesdays at 10am PST. Are those calls still happening or is there any other StarlingX team call at that time? >> >> I’m asking both to keep the wiki up to date as well as to see if the Zoom account is available at that time. 
>>
>> Thanks,
>> Ildikó
>>
>>
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From ildiko.vancsa at gmail.com Tue Feb 16 15:37:47 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Tue, 16 Feb 2021 16:37:47 +0100
Subject: [Starlingx-discuss] StarlingX hands-on workshop materials as getting started guide
In-Reply-To: <944DA0B9-9B69-45E1-8E7A-8728A6666D1A@gmail.com>
References: <944DA0B9-9B69-45E1-8E7A-8728A6666D1A@gmail.com>
Message-ID:

Hi,

Just checking in again to see if anyone has any updates on the topic?

Thanks,
Ildikó

> On Feb 9, 2021, at 16:05, Ildiko Vancsa wrote:
>
> Hi StarlingX Community,
>
> I know that we haven't had the chance to run the hands-on workshop for a while now due to the changes in how events are held, but I think it would be good to dig up the materials that we had to utilize them.
>
> StarlingX can get very complex, especially if someone is not familiar with all the components that the project integrates on top of the services the community is actively designing and developing.
>
> I think it would be a good exercise to look into the materials that we had and maybe turn them into a getting started guide in terms of how to explore the key features of the project. What do people think?
>
> To get started, does anyone have pointers to the exercises and any documentation we had for the training, or have the materials saved somewhere to share?
>
> Thanks,
> Ildikó
>
>

From ildiko.vancsa at gmail.com Tue Feb 16 17:34:56 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Tue, 16 Feb 2021 18:34:56 +0100
Subject: [Starlingx-discuss] Community marketing call is cancelled tomorrow
Message-ID: <7253DE15-B170-4C0F-B6CC-EFF768EC2B48@gmail.com>

Hi,

As a few of us have conflicts at the time of the Community Marketing Call tomorrow, we will cancel this week's meeting and reconvene on the next occasion in two weeks.

If you have any topics to discuss or questions, please drop them in as a reply to this mail.

Thanks,
Ildikó

From alexandru.dimofte at intel.com Tue Feb 16 20:10:09 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Tue, 16 Feb 2021 20:10:09 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210216T003258Z
Message-ID:

Sanity Test from 2021-February-16 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210216T003258Z/outputs/iso/ )

Status: RED

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210216T003258Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

I set the status to RED because, during the daily sanity execution, we were unable to install the STANDARD configuration on bare metal. See: https://bugs.launchpad.net/starlingx/+bug/1915864

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 8413 bytes
Desc: image001.png
URL:

From Ghada.Khalil at windriver.com Wed Feb 17 00:36:31 2021
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Wed, 17 Feb 2021 00:36:31 +0000
Subject: Re: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210216T003258Z
In-Reply-To:
References:
Message-ID:

I checked the CHANGELOGs between the last green sanity ( http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210214T023311Z/outputs/iso/ ) and the red sanity below. Based on the changelogs, the additional commits are:

./stx-tools 07b8d07a36942e88b45bc5fa7c95cfa76af08463 2021-02-15 15:06:36 +0000 Gerrit Code Review review at openstack.org Merge "nspr/nss/nss-softokn/nss-util: CVE-2018-12404 and CVE-2019-11745"
./stx-tools 6ed078685c413b7199cd18bdde6cb8ad33fdb711 2021-01-28 21:36:35 -0500 Zhixiong Chi zhixiong.chi at windriver.com nspr/nss/nss-softokn/nss-util: CVE-2018-12404 and CVE-2019-11745

See: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210216T003258Z/outputs/CHANGELOG.txt

@Chi, Zhixiong please investigate and confirm if this is related to your commits.

Regards,
Ghada

From: Dimofte, Alexandru
Sent: Tuesday, February 16, 2021 3:10 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210216T003258Z

[Please note: This e-mail is from an EXTERNAL e-mail address]

Sanity Test from 2021-February-16 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210216T003258Z/outputs/iso/ )

Status: RED

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210216T003258Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

I set the status to RED because, during the daily sanity execution, we were unable to install the STANDARD configuration on bare metal. See: https://bugs.launchpad.net/starlingx/+bug/1915864

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.png
Type: image/png
Size: 5220 bytes
Desc: image002.png
URL:

From Ghada.Khalil at windriver.com Wed Feb 17 02:00:10 2021
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Wed, 17 Feb 2021 02:00:10 +0000
Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting -Feb 10/2020
Message-ID:

Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases

stx.5.0 - Milestone-3 Progress
- Feature Status
  - No concerns; code merges are progressing well
  - https://docs.google.com/spreadsheets/d/1JbOQELqXG_GDoP1jo6YoRDytpTkJEs89TVimzvrqc_A/edit#gid=1107209846
- Feature Test
  - No concerns raised by the test team
  - https://docs.google.com/spreadsheets/d/1jro4ItdSobB0-fseK_4cOZJHhUyuh_eCBBsDOp_A5Pc/edit#gid=968103774
- Regression Test
  - Planning to start Feb 22
  - Expected duration is 6 wks, up to March 29
- Documentation
  - No concerns; received updates for features
  - Ceph Rook and Edge Worker in progress

From pvmpublic at gmail.com Wed Feb 17 06:33:17 2021
From: pvmpublic at gmail.com (Pratik M.)
Date: Wed, 17 Feb 2021 12:03:17 +0530
Subject: [Starlingx-discuss] Questions on StarlingX upgrades
Message-ID:

Hi,

Looking at the URL below, I had a few questions to understand the overall upgrade strategy.
https://docs.starlingx.io/specs/specs/stx-4.0/approved/starlingx-2007403-platform-upgrades.html

Q1: Can I assume that from R4.0 onwards, the intent is that all future releases will support seamless upgrades for both minor and major versions?
Q2: Is downgrade an explicit non-goal?
Q3: Any hooks for non-stx applications to back up and restore their state?
Q4: For a simplex edge node, it involves an N+1 ISO (re)installation. Is a more seamless option on the roadmap?
Q5: For a simplex edge subcloud, is it possible to orchestrate this from the central cloud?
Q6: Any non-goals for the seamless upgrade, like say a CentOS upgrade?

Any pointers to additional info are very welcome.

Thanks in advance
Pratik
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From haridhar.kalvala at intel.com Wed Feb 17 10:13:26 2021
From: haridhar.kalvala at intel.com (Kalvala, Haridhar)
Date: Wed, 17 Feb 2021 10:13:26 +0000
Subject: [Starlingx-discuss] Issue applying helm chart in horizon container with custom dashboard changes.
Message-ID:

Hello All,

We are trying to add the Fault dashboard to the OpenStack Horizon container (partial patch here). We are facing the below error at _stylesheets.html while applying the stx-openstack helm chart.

We browsed for this error and found the following relevant link (https://bugs.launchpad.net/horizon/+bug/1585606), and followed the suggestion in Launchpad. However, the issue still persists.

Any suggestions/inputs here will help.

Regards,
Haridhar Kalvala

Error Message:

+ /tmp/manage.py compress --force
/var/lib/openstack/lib64/python3.6/site-packages/scss/namespace.py:172: DeprecationWarning: inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()
  argspec = inspect.getargspec(function)
/var/lib/openstack/lib/python3.6/site-packages/django/contrib/staticfiles/templatetags/staticfiles.py:26: RemovedInDjango30Warning: {% load staticfiles %} is deprecated in favor of {% load static %}.
  RemovedInDjango30Warning,
/var/lib/openstack/lib/python3.6/site-packages/memcache.py:132: DeprecationWarning: invalid escape sequence \
  """
CommandError: An error occurred during rendering /var/lib/openstack/lib/python3.6/site-packages/openstack_dashboard/templates/_stylesheets.html: Couldn't find anything to import: /dashboard/fault_management/fault_management.scss
Extensions: , ,
Search path:

on line 1 of 'string:354662134010626d:\n // My Themes\n@import "/themes/default/variables";\n\n// Horizon\n@import "/dashboard/scss/horizon.'
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Bill.Zvonar at windriver.com Wed Feb 17 12:41:25 2021
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 17 Feb 2021 12:41:25 +0000
Subject: [Starlingx-discuss] Community (& TSC) Call (Feb 17, 2021)
Message-ID:

Hi all, reminder of the weekly TSC/Community calls coming up later today. Please feel free to add items to the agenda [0] for the community call.

Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210217T1500
[3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexandru.dimofte at intel.com Wed Feb 17 14:11:32 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Wed, 17 Feb 2021 14:11:32 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210217T023344Z
Message-ID:

Sanity Test from 2021-February-17 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210217T023344Z/outputs/iso/ )

Status: RED

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210217T023344Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

I set the status to RED because, during the daily sanity execution, we were unable to install the STANDARD configuration on bare metal. See: https://bugs.launchpad.net/starlingx/+bug/1915864

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 8413 bytes
Desc: image001.png
URL:

From Saul.Wold at windriver.com Wed Feb 17 14:15:27 2021
From: Saul.Wold at windriver.com (Saul Wold)
Date: Wed, 17 Feb 2021 06:15:27 -0800
Subject: [Starlingx-discuss] Distro Team Meeting Notes 2/17
Message-ID:

Folks,

We had a very brief meeting today.

Bart asked about the status of the CentOS-8 update work, which has been stopped, but he would like to use some of that work for the Python3 changes.

I would like to float the idea of merging the Distro Team meeting and the Multi-OS meeting into one bi-weekly meeting. I would prefer the Tuesday 7:30 slot; we are about to "spring forward" in mid-March, so for China it would be 22:30. I will wait to hear from the broader community before making a final choice.

--
Sau!

From yatindra.shashi at intel.com Wed Feb 17 14:57:59 2021
From: yatindra.shashi at intel.com (Shashi, Yatindra)
Date: Wed, 17 Feb 2021 14:57:59 +0000
Subject: [Starlingx-discuss] failed to run Kubelet: invalid configuration: cgroup-root ["k8s-infra"] doesn't exist
Message-ID:

Hi Team,

I am trying to join an Ubuntu (18.04) machine to the STX 4.0 K8s cluster using kubeadm join, but it fails because kubelet is not running properly. I installed the kubeadm and kubelet node binaries of version 1.18.12. I checked the bug report and ran the script (kubelet-cgroup-setup.sh), but I still get the issue. [ https://bugs.launchpad.net/starlingx/+bug/1828270]

Can anybody suggest what the reason could be?

journalctl -f:

Feb 17 15:48:33 cont0-NUC8i7HVK systemd[1]: Started Kubernetes systemd probe.
Feb 17 15:48:33 cont0-NUC8i7HVK kubelet[1259]: I0217 15:48:33.178057 1259 server.go:417] Version: v1.18.12
Feb 17 15:48:33 cont0-NUC8i7HVK kubelet[1259]: I0217 15:48:33.178299 1259 plugins.go:100] No cloud provider specified.
Feb 17 15:48:33 cont0-NUC8i7HVK kubelet[1259]: I0217 15:48:33.178318 1259 server.go:838] Client rotation is on, will bootstrap in background
Feb 17 15:48:33 cont0-NUC8i7HVK kubelet[1259]: I0217 15:48:33.186043 1259 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 17 15:48:33 cont0-NUC8i7HVK kubelet[1259]: I0217 15:48:33.187503 1259 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Feb 17 15:48:33 cont0-NUC8i7HVK kubelet[1259]: F0217 15:48:33.212500 1259 server.go:274] failed to run Kubelet: invalid configuration: cgroup-root ["k8s-infra"] doesn't exist
Feb 17 15:48:33 cont0-NUC8i7HVK systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Feb 17 15:48:33 cont0-NUC8i7HVK systemd[1]: kubelet.service: Failed with result 'exit-code'.

--

cont0 at cont0-NUC8i7HVK:~$ sudo find /sys/fs/cgroup/ -name k8s-infra
[sudo] password for cont0:
Sorry, try again.
[sudo] password for cont0:
Sorry, try again.
[sudo] password for cont0:
/sys/fs/cgroup/net_cls,net_prio/k8s-infra
/sys/fs/cgroup/blkio/k8s-infra
/sys/fs/cgroup/cpuset/k8s-infra
/sys/fs/cgroup/perf_event/k8s-infra
/sys/fs/cgroup/memory/k8s-infra
/sys/fs/cgroup/pids/k8s-infra
/sys/fs/cgroup/cpu,cpuacct/k8s-infra
/sys/fs/cgroup/freezer/k8s-infra
/sys/fs/cgroup/hugetlb/k8s-infra
/sys/fs/cgroup/devices/k8s-infra
/sys/fs/cgroup/systemd/k8s-infra

Thanks in advance for your help.

Mit freundlichen Grüßen/ with best regards,

Yatindra Shashi
IoTG DE- Intel Corporation
Munich, Germany

Intel Deutschland GmbH
Registered Address: Am Campeon 10, 85579 Neubiberg, Germany
Tel: +49 89 99 8853-0, www.intel.de
Managing Directors: Christin Eisenschmid, Sharon Heck, Tiffany Doon Silva
Chairperson of the Supervisory Board: Nicole Lau
Registered Office: Munich
Commercial Register: Amtsgericht Muenchen HRB 186928
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.little at windriver.com Wed Feb 17 15:16:38 2021
From: scott.little at windriver.com (Scott Little)
Date: Wed, 17 Feb 2021 10:16:38 -0500
Subject: Re: [Starlingx-discuss] StarlingX - 4.0 build issue
In-Reply-To:
References:
Message-ID: <9f819724-4791-c747-8333-048f368cb39f@windriver.com>

I can't reproduce your issue. The links are valid, and they download quickly for me.

Are you going through a proxy?

Perhaps there was a transient networking issue. Can you retry now?

Scott

On 2021-02-14 10:59 p.m., Dharwadkar, Sriram wrote:
>
> **[Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Hi,
>
> We are trying to set up the StarlingX build environment by following this link,
>
> Under the Download packages section, when I run this command
>
> cd $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools && bash
> download_mirror.sh
>
> I'm getting the below error. Any help would be appreciated.
>
> http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/updates/x86_64/repodata/9aa6c4d73447c5377c6be87fa5dc1da42028fa19830d050572d53ac4b1d8ee53-primary.sqlite.bz2:
> [Errno 12] Timeout on
> http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/updates/x86_64/repodata/9aa6c4d73447c5377c6be87fa5dc1da42028fa19830d050572d53ac4b1d8ee53-primary.sqlite.bz2:
> (28, 'Operation too slow. Less than 1000 bytes/sec transferred the
> last 30 seconds')
>
> Trying other mirror.
>
> http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2:
> [Errno 12] Timeout on
> http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2:
> (28, 'Operation too slow. Less than 1000 bytes/sec transferred the
> last 30 seconds')
>
> Trying other mirror.
> > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.2.1511/updates/x86_64/repodata/f2cde4bfb7893be9431c7b6fe326e7b37400c099d3c5fb9d25b6dad95a454ae4-primary.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.2.1511/updates/x86_64/repodata/f2cde4bfb7893be9431c7b6fe326e7b37400c099d3c5fb9d25b6dad95a454ae4-primary.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.2.1511/updates/x86_64/repodata/2a3cb8d136b161dbf3a77be25ee4d7f0f5ee0137ce40730f775a1c95390fb566-filelists.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.2.1511/updates/x86_64/repodata/2a3cb8d136b161dbf3a77be25ee4d7f0f5ee0137ce40730f775a1c95390fb566-filelists.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.3.1611/os/x86_64/repodata/c901f2d76fdc6e829759784f7cc72db5e7189b9f0dd17f155c158371b7f1e09e-filelists.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.3.1611/os/x86_64/repodata/c901f2d76fdc6e829759784f7cc72db5e7189b9f0dd17f155c158371b7f1e09e-filelists.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/os/x86_64/repodata/35713b41d44af605d346a40ca9f8d25e836dea8ec7e8feed5993d856cab9650f-primary.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/os/x86_64/repodata/35713b41d44af605d346a40ca9f8d25e836dea8ec7e8feed5993d856cab9650f-primary.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/os/x86_64/repodata/da4243d99e231c889ebbc77b6959cd7eb60dd94b63825b006705eada66eb9fe0-filelists.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/os/x86_64/repodata/da4243d99e231c889ebbc77b6959cd7eb60dd94b63825b006705eada66eb9fe0-filelists.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. 
> > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.3.1611/updates/x86_64/repodata/eabec8fa6e23e46f54bc072a65cb3d36cae8f5a61dce8d4e7c1ee74f911aa2cd-primary.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.3.1611/updates/x86_64/repodata/eabec8fa6e23e46f54bc072a65cb3d36cae8f5a61dce8d4e7c1ee74f911aa2cd-primary.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/os/x86_64/repodata/8774b62a6c3a7503997fb932bdafa2c4d9a39420e39f6e169e3b095f11bbe8a2-primary.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/os/x86_64/repodata/8774b62a6c3a7503997fb932bdafa2c4d9a39420e39f6e169e3b095f11bbe8a2-primary.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/updates/x86_64/repodata/d5344ec18daf1c28b529e87b26e44f68aac04e44d47162cb2181dca511bfa25a-primary.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/7.5.1804/updates/x86_64/repodata/d5344ec18daf1c28b529e87b26e44f68aac04e44d47162cb2181dca511bfa25a-primary.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/epel/dl.fedoraproject.org/pub/epel/testing/7/x86_64/debug/repodata/c024fa4eaa1cf97536872da8b1cf33131b73e41a3a08c684c763ae9d92039a19-filelists.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/epel/dl.fedoraproject.org/pub/epel/testing/7/x86_64/debug/repodata/c024fa4eaa1cf97536872da8b1cf33131b73e41a3a08c684c763ae9d92039a19-filelists.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/os/x86_64/repodata/0bc33a91d6f6e2e8b50494c7be3b810cac548038cc62832e01f967060aa76a23-filelists.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/updates/x86_64/repodata/ef2823e5451170fcfe7ea04b89f78f42d816df8469e2acd1da7cbe5d3a950a1e-primary.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/updates/x86_64/repodata/ef2823e5451170fcfe7ea04b89f78f42d816df8469e2acd1da7cbe5d3a950a1e-primary.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. 
> > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/updates/x86_64/repodata/9aa6c4d73447c5377c6be87fa5dc1da42028fa19830d050572d53ac4b1d8ee53-primary.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/updates/x86_64/repodata/9aa6c4d73447c5377c6be87fa5dc1da42028fa19830d050572d53ac4b1d8ee53-primary.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/updates/x86_64/repodata/ec427d9c8fc0bfe724f9012833182dc59568017a04b1a71599a0c4b4f909a9c0-filelists.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/updates/x86_64/repodata/ec427d9c8fc0bfe724f9012833182dc59568017a04b1a71599a0c4b4f909a9c0-filelists.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/os/x86_64/repodata/4e8d3a1931f819fc05181a8dfc13466424afc8604727b85aa5f4b3c82e7db36b-filelists.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/vault.centos.org/centos/7.6.1810/os/x86_64/repodata/4e8d3a1931f819fc05181a8dfc13466424afc8604727b85aa5f4b3c82e7db36b-filelists.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/8774b62a6c3a7503997fb932bdafa2c4d9a39420e39f6e169e3b095f11bbe8a2-primary.sqlite.bz2: > [Errno 12] Timeout on > http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.7.1908/os/x86_64/repodata/8774b62a6c3a7503997fb932bdafa2c4d9a39420e39f6e169e3b095f11bbe8a2-primary.sqlite.bz2: > (28, 'Operation too slow. Less than 1000 bytes/sec transferred the > last 30 seconds') > > Trying other mirror. > > Regards, > > Sriram > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Feb 17 15:43:36 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 17 Feb 2021 15:43:36 +0000 Subject: [Starlingx-discuss] Moving from Simplex to Distribute In-Reply-To: References: Message-ID: Hi Danishka – it’s not possible to move from Simplex to Duplex without a re-install. Bill… From: open infra Sent: Monday, February 15, 2021 10:55 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Moving from Simplex to Distribute [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I have deployed StarlingX AIO Simplex with the idea of moving to Duplex and Distributed model step by step. Is it possible to move forward 'without a fresh installation' of AIO Duplex and Distributed? Regards Danishka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Barton.Wensley at windriver.com Wed Feb 17 15:47:15 2021 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 17 Feb 2021 15:47:15 +0000 Subject: [Starlingx-discuss] Questions on StarlingX upgrades In-Reply-To: References: Message-ID: Pratik – see my responses to your questions below. Bart From: Pratik M. Sent: Wednesday, February 17, 2021 1:33 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Questions on StarlingX upgrades [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, Looking at the URL below, I had a few questions to understand the overall upgrade strategy. https://docs.starlingx.io/specs/specs/stx-4.0/approved/starlingx-2007403-platform-upgrades.html Q1: Can I assume that from R4.0 onwards, the intent is that all future releases will support seamless upgrades for both minor and major versions. [Bart] You can’t assume this because: 1. Upgrades are not supported for minor versions – moving from one minor version to the next would be done by building patches for the updated packages and applying them. The starlingx community does not provide patches – you would need to do this on your own (or use a commercial distribution of starlingx that supports patching). 2. Upgrades between major versions are supported by the infrastructure described in the specification you referenced above. However, the starlingx community does not test or support upgrades. To do an upgrade, you would need to do your own upgrade testing and fix any issues you uncover. Upgrades often require additional changes on the “from” release side, so you would likely need to build your own patches as well. Q2: Is downgrade an explicit non-goal? [Bart] Correct – starlingx does not support downgrades. Q3: Any hooks for non-stx applications to backup and restore their state? [Bart] No Q4: For a simplex edge node, it involves a N+1 ISO (re)installation. Is a more seamless option on the roadmap [Bart] No Q5. For a simplex edge subcloud, possible to orchestrate this from the central cloud? [Bart] Similar to upgrades, there is an infrastructure for this (i.e. dcmanager upgrade-strategy commands). These commands haven’t been included in the starlingx documentation yet and similar to upgrades, the starlingx community does not test or support these commands. Q6: Any non-goals for the seamless upgrade, like say CentOS upgrade?. [Bart] There has been no decision yet on whether starlingx will be moving to a new version of CentOS or to a different Linux distribution so I can’t comment on this yet. Any pointers to additional info are very welcome. Thanks in advance Pratik -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From Bill.Zvonar at windriver.com Wed Feb 17 15:47:35 2021
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 17 Feb 2021 15:47:35 +0000
Subject: Re: [Starlingx-discuss] Community (& TSC) Call (Feb 17, 2021)
In-Reply-To:
References:
Message-ID:

From today's call:

* Standing Topics
  * Sanity
    * currently red just for Standard Config on bare metal
    * suspect commit identified: http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010842.html
    * we agreed we can wait a day for the response to the mail above since this only impacts Standard Config on bare metal
  * Gerrit Reviews in Need of Attention
    * nothing this week
* Topics for this Week
  * nothing this week
* ARs from Previous Meetings
  * Bruce will check in next week on the RHEL Dup action
* Open Requests for Help
  * Sanity-Test failed [sanath kumar]
    * Hello Team, as discussed in the previous meet, I have checked with all the documents and made changes accordingly.
    * I request you to open this link and have a look at the document https://drive.google.com/file/d/1vHx-GZ5mxAbnAiegK8jl7GqnjtokmaWf/view?usp=sharing
    * Please let me know what can be done next?
    * per Nic, the 2 failing testcases are issues with the tests, so they can be ignored
    * Sanath asked if the Degraded state will impact his deployment
      * per Frank, it depends - check logs & alarms to see the specific reason for the Degraded state
  * [HSC] (Lokendra)
    * OpenStack APPLICATION UPLOAD GETTING FAILED - with LAG Data Interface
      * https://bugs.launchpad.net/starlingx/+bug/1915231
      * not screened yet, should be done this week
    * how to enable Debug logs in the /var/log/sysinv.log file
      * per Bart, should be able to do this by editing the /etc/sysinv/sysinv.config file & restarting the service
  * Starlingx integration with Open Source Mano (OSM) (Sin Lam Tan)
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010830.html
    * Austin has responded
  * StarlingX - 4.0 build issue (Sriram)
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010828.html
    * Scott responded re: possible proxy/network issues (Scott has no issues reaching)
  * Moving from Simplex to Distribute (Danishka)
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010834.html
    * per Bart the answer is no, Bill will respond
  * Questions on StarlingX upgrades (Pratik)
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010844.html
    * Bart will review & respond
  * Issue applying helm chart in horizon container with custom dashboard changes (Haridhar)
    * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010845.html
    * no comments
* Build Matters (if required)
  * nothing to discuss this week, per Scott there were 6 of 7 clean builds
    * one build saw issues related to our mirror of the Kata repo
    * (may have been a side effect of the issues we had a couple of weeks ago)

---

From: Zvonar, Bill
Sent: Wednesday, February 17, 2021 7:41 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Community (& TSC) Call (Feb 17, 2021)

Hi all, reminder of the weekly TSC/Community calls coming up later today. Please feel free to add items to the agenda [0] for the community call.

Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210217T1500
[3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09

From lists at optimcloud.com Wed Feb 17 15:50:52 2021
From: lists at optimcloud.com (lists at optimcloud.com)
Date: Wed, 17 Feb 2021 22:50:52 +0700
Subject: Re: [Starlingx-discuss] Moving from Simplex to Distribute
In-Reply-To:
References:
Message-ID:

On 2021-02-17 22:43, Zvonar, Bill wrote:
> Hi Danishka – it’s not possible to move from Simplex to Duplex
> without a re-install.

Can you elaborate on the reason this is the case? I'm also trying to choose the best deployment method that will scale.

>
> Bill…
>
> From: open infra
> Sent: Monday, February 15, 2021 10:55 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Moving from Simplex to Distribute
>
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Hi,
>
> I have deployed StarlingX AIO Simplex with the idea of moving to
> Duplex and Distributed model step by step. Is it possible to move
> forward 'without a fresh installation' of AIO Duplex and Distributed?
>
> Regards
>
> Danishka
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Bill.Zvonar at windriver.com Wed Feb 17 15:57:20 2021
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 17 Feb 2021 15:57:20 +0000
Subject: Re: [Starlingx-discuss] Moving from Simplex to Distribute
In-Reply-To:
References:
Message-ID:

Hi Danishka - it's simply that it's not a capability that has been implemented at this time.

-----Original Message-----
From: lists at optimcloud.com
Sent: Wednesday, February 17, 2021 10:51 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Moving from Simplex to Distribute

[Please note: This e-mail is from an EXTERNAL e-mail address]

On 2021-02-17 22:43, Zvonar, Bill wrote:
> Hi Danishka – it’s not possible to move from Simplex to Duplex without
> a re-install.

Can you elaborate on the reason this is the case? I'm also trying to choose the best deployment method that will scale.

>
> Bill…
>
> From: open infra
> Sent: Monday, February 15, 2021 10:55 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Moving from Simplex to Distribute
>
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Hi,
>
> I have deployed StarlingX AIO Simplex with the idea of moving to
> Duplex and Distributed model step by step. Is it possible to move
> forward 'without a fresh installation' of AIO Duplex and Distributed?
> > Regards > > Danishka > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Wed Feb 17 16:39:40 2021 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 17 Feb 2021 16:39:40 +0000 Subject: [Starlingx-discuss] Moving from Simplex to Distribute In-Reply-To: References: Message-ID: The reason is that each of the deployments are different enough internally that it would be challenging to convert from one to another. If you want a solution that scales, the Standard or Standard with External Storage configurations are the right deployment methods. brucej -----Original Message----- From: lists at optimcloud.com Sent: Wednesday, February 17, 2021 7:51 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Moving from Simplex to Distribute On 2021-02-17 22:43, Zvonar, Bill wrote: > Hi Danishka – it’s not possible to move from Simplex to Duplex without > a re-install. can you elaborate on the reason this is the case, im also faced with best deployment method that will scale > > Bill… > > From: open infra > Sent: Monday, February 15, 2021 10:55 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Moving from Simplex to Distribute > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi, > > I have deployed StarlingX AIO Simplex with the idea of moving to > Duplex and Distributed model step by step. Is it possible to move > forward 'without a fresh installation' of AIO Duplex and Distributed? > > Regards > > Danishka > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From thomas.yungblut at gmail.com Wed Feb 17 18:11:58 2021 From: thomas.yungblut at gmail.com (Thomas Yungblut) Date: Wed, 17 Feb 2021 13:11:58 -0500 Subject: [Starlingx-discuss] Adding IoT devices to StarlingX Subcloud as K8s worker nodes Message-ID: <7bf67171-958f-641f-56e0-1d48eef602ca@gmail.com> Hello, My team has been attempting to use StarlingX (R5, Distributed cloud deployment) to monitor and maintain edge cloud servers. Part of our desired workload is a K8s deployment from one of the subcloud controllers. Ideally, some pods of this workload would run on lightweight node such as a Raspberry Pi (running only K8s, not StarlingX software) but we are having trouble joining this device to the subcloud's existing K8s cluster. There seem to have been features in StarlingX related to this in previous releases but perhaps they have been deprecated. I've attached screenshots / links below. Any assistance joining outside nodes to the K8s cluster would be appreciated. Screenshots from Open Infrastructure Foundation 's YouTube channel linked below, ~ 35:45 - 36:30 https://youtu.be/Wg095dkagBc?t=2145 -- Thomas Yungblut -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: klklffflpblkcdfd.png Type: image/png Size: 439693 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pgoololmoknijkdd.png Type: image/png Size: 500962 bytes Desc: not available URL: From openinfradn at gmail.com Wed Feb 17 19:56:03 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 18 Feb 2021 01:26:03 +0530 Subject: [Starlingx-discuss] Moving from Simplex to Distribute In-Reply-To: References: Message-ID: Thank you Zvonar and Jones! On Wed, Feb 17, 2021 at 10:13 PM Jones, Bruce E wrote: > The reason is that each of the deployments are different enough internally > that it would be challenging to convert from one to another. If you want a > solution that scales, the Standard or Standard with External Storage > configurations are the right deployment methods. > > brucej > > -----Original Message----- > From: lists at optimcloud.com > Sent: Wednesday, February 17, 2021 7:51 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Moving from Simplex to Distribute > > On 2021-02-17 22:43, Zvonar, Bill wrote: > > Hi Danishka – it’s not possible to move from Simplex to Duplex without > > a re-install. > > can you elaborate on the reason this is the case, im also faced with best > deployment method that will scale > > > > > Bill… > > > > From: open infra > > Sent: Monday, February 15, 2021 10:55 PM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Moving from Simplex to Distribute > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > > > Hi, > > > > I have deployed StarlingX AIO Simplex with the idea of moving to > > Duplex and Distributed model step by step. Is it possible to move > > forward 'without a fresh installation' of AIO Duplex and Distributed? > > > > Regards > > > > Danishka > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Feb 17 20:24:06 2021 From: scott.little at windriver.com (Scott Little) Date: Wed, 17 Feb 2021 15:24:06 -0500 Subject: [Starlingx-discuss] CENGN outage Message-ID: <066dc7f7-be4b-8e04-80b0-2ce78be0f628@windriver.com> CENGN is having network issues. http://mirror.starlingx.cengn.ca/ is unavailable. They are working it. From build.starlingx at gmail.com Wed Feb 17 22:57:53 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Feb 2021 17:57:53 -0500 (EST) Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 91 - Failure! 
Message-ID: <219557445.149.1613602673908.JavaMail.javamailuser@localhost> Project: STX_build_docker_images_layered Build #: 91 Status: Failure Timestamp: 20210217T175331Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210217T173034Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210217T173034Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210217T173034Z/logs MASTER_BUILD_NUMBER: 115 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210217T173034Z/logs MASTER_JOB_NAME: STX_build_layer_containers_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers PUBLISH_TIMESTAMP: 20210217T173034Z DOCKER_BUILD_ID: jenkins-master-containers-20210217T173034Z-builder TIMESTAMP: 20210217T173034Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210217T173034Z/inputs LAYER: containers PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210217T173034Z/outputs From build.starlingx at gmail.com Wed Feb 17 22:57:55 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Feb 2021 17:57:55 -0500 (EST) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 115 - Failure! Message-ID: <1914363682.152.1613602675831.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 115 Status: Failure Timestamp: 20210217T173034Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210217T173034Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From build.starlingx at gmail.com Wed Feb 17 23:29:44 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Feb 2021 18:29:44 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1556 - Failure! 
Message-ID: <2079916368.155.1613604585122.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1556 Status: Failure Timestamp: 20210217T231501Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210217T230007Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210217T230007Z DOCKER_BUILD_ID: jenkins-master-distro-20210217T230007Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210217T230007Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210217T230007Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/distro From maryx.camp at intel.com Thu Feb 18 02:21:40 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 18 Feb 2021 02:21:40 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 17-Feb-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 17-Feb-21 All -- reviews merged since last meeting: 3 All -- bug status -- 17 total - team agrees to defer all low priority LP until the upstreaming effort is completed. 13 LP are WIP against API documentation, which is generated from source code (low priority). Those reviews are here: https://review.opendev.org/#/q/project:starlingx/config New on 10Feb21: Documentation needs to be updated for LAG and VLAN type interfaces [https://bugs.launchpad.net/starlingx/+bug/1915285] Status/questions/opens We talked about tools. Ron & Juanita are using Visual Studio Code from Microsoft and it's working out very well. Mary knows other folks at Intel who are using this also, look into it. Back in 2020 we discussed improving the process for release notes for R5.0 We looked at the Nova example here: https://docs.openstack.org/releasenotes/nova/ussuri.html We agreed that the STX release notes process doesn't need to be as complicated as the OpenStack tooling and we can do something simpler, like keep a running list in a RST file. Suggestion to start with 1 standing "R5 RN" review and get the developers to add bullets as they add features. Team agrees with this plan. AR Mary set this up and bring to next week's release team meeting. The STX RN landing page has 13 project-specific RN links (Bare Metal, Clients, Configuration, etc) They haven't been updated since Release 1: https://docs.starlingx.io/releasenotes/index.html#release-notes Team agrees this is old info, no longer maintained, and can likely be deleted. AR Mary to make a review deleting these bullets, be sure to get broad review and loop in cores from all the sub-projects for their approval. Ron mentioned there are some existing ASCII formatted tables in the docs that are fragile. Ron suggests converting them into list tables, to make it easier for future updates and for downstreaming. Team agrees. 
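Returning to the earlier kubelet failure ("cgroup-root ["k8s-infra"] doesn't exist") when joining an Ubuntu node: StarlingX runs kubelet with a custom cgroup root, and kubelet refuses to start unless that cgroup exists under every cgroup controller it tracks. The kubelet-cgroup-setup.sh script from the Launchpad report creates these; a quick check for a controller that may have been missed, as a plain POSIX shell sketch run on the joining node:

  # Report any mounted cgroup-v1 controller that lacks the k8s-infra group
  for d in /sys/fs/cgroup/*/; do
      [ -d "${d}k8s-infra" ] || echo "missing: ${d}k8s-infra"
  done

If nothing is reported missing, the next thing worth comparing is the cgroup driver: a mismatch between the joining node's kubelet/Docker driver (often systemd on Ubuntu) and the driver the cluster's kubelets were configured with can produce similar start-up failures.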
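Related to the earlier question about joining lightweight nodes (such as a Raspberry Pi) to a subcloud's Kubernetes cluster: on a kubeadm-based cluster a fresh join command can be generated on the active controller; a sketch, assuming the usual admin kubeconfig location:

  sudo kubeadm token create --print-join-command --kubeconfig /etc/kubernetes/admin.conf

This prints a "kubeadm join ..." line, with a token and discovery hash, to run on the joining node. Whether the joined node then behaves is a separate matter: StarlingX will not manage such a node, the platform's container images target x86_64 (an arm64 board would need its own images), and the kubelet cgroup and driver caveats from the thread above still apply.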
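One detail for the sysinv debug-logging item in the community-call minutes above: sysinv is an oslo.config-based service, so verbose logging is normally switched on with the standard debug flag in its configuration file (the minutes say /etc/sysinv/sysinv.config; on typical installs the file is /etc/sysinv/sysinv.conf, so check your node), followed by restarting the service as the minutes note. A minimal sketch of the config fragment:

  [DEFAULT]
  debug = True

The file path and restart mechanics here are recalled from typical installs rather than verified on a running system, so treat them as a starting point.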
From openinfradn at gmail.com Thu Feb 18 02:54:39 2021
From: openinfradn at gmail.com (open infra)
Date: Thu, 18 Feb 2021 08:24:39 +0530
Subject: Re: [Starlingx-discuss] Alarm "Memory threshold"
In-Reply-To:
References: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com>
Message-ID:

Hi,

Is there a way we can adjust the memory and CPU of controller-0 before we deploy StarlingX?

In my case, even without a single VM under OpenStack, controller-0 shows the following memory usage, though the host machine (I use AIO Simplex in a virtualized environment) has enough memory.

controller-0:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:          17894       11131         406          65        6356        5967
Swap:             0           0           0

On Sun, Feb 14, 2021 at 7:55 PM Ildiko Vancsa wrote:

> Hi,
>
> I saw that you have a 'reserved for platform' memory entry with the value
> of 4600 MiB.
>
> I've found entries below that report platform memory usage having an over
> 100% value:
>
> daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info
> platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform:
> 5274.3 MiB (Base: 4684.1, k8s-system: 590.2), k8s-addon: 0.0
>
> So while the overall memory usage in the system isn't over the threshold I
> assume the usage of that reserved amount of memory still exceeds it. Have
> you seen any configuration option to increase the amount of platform memory?
>
> You can also look into collectd if that leads you closer to what it is
> reading to get those values.
>
> I don't have access to a StarlingX install to look into this, so I cannot
> tell where to look for that.
>
> I found some documentation for Kubernetes to set memory limits, but I'm
> not sure that applies here. (
> https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/
> )
>
> Have you looked into the above already?
>
> Thanks,
> Ildikó
>
>
> > On Feb 14, 2021, at 10:51, Rai, Ankush wrote:
> >
> > Not sure exactly which log file to check. Captured some data here,
> please check if this can help.
> >
> > Software Version: 20.06
> > Memory:
> > Reserved for Platform: 4600 MiB
> > Usable Total: 13293 MiB
> > Available: 13293 MiB
> >
> > The fm has logged these events.
> >
> > fm-event.log:2021-02-10T08:57:25.000 controller-0 fmManager: info {
> "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold
> exceeded ; threshold 80.00%, actual 88.83%", "entity_instance_id" :
> "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0",
> "severity" : "major", "state" : "set", "timestamp" : "2021-02-10
> 08:57:25.484131" }
> > fm-event.log:2021-02-10T09:17:55.000 controller-0 fmManager: info {
> "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold
> exceeded ; threshold 90.00%, actual 95.19%", "entity_instance_id" :
> "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0",
> "severity" : "critical", "state" : "set", "timestamp" : "2021-02-10
> 09:17:55.482154" }
> >
> > [snip]
> >
> > From: Ildiko Vancsa
> > Sent: Sunday, February 14, 2021 2:42 PM
> > To: Rai, Ankush
> > Cc: starlingx-discuss at lists.starlingx.io
> > Subject: Re: [Starlingx-discuss] Alarm "Memory threshold"
> >
> > Hi Ankush,
> >
> > Do you have any log entries on the system you could share here that show
> the memory readings the alarm might be triggered by?
> >
> > Thanks,
> > Ildikó
> >
> > > On Feb 14, 2021, at 09:26, Rai, Ankush wrote:
> > >
> > > Hi,
> > >
> > > Below alarm is getting raised for every node of the central and edge
> cloud.
> > > > > > “Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%” > > > > > > It looks to be the false alarm as nodes are having enough available > memory. Please config the root cause of this alarm. > > > > > > Thanks, > > > Ankush > > > > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > > http://secure-web.cisco.com/1PV8HDMvVRW0RqVu-SKTRWk6WWzv0LMknZ5lZPnq_3AC44IJMRoTkCmBzbDzfSRKy1sR51Ro2VO06UwWt-4FdJVml-0HlHOjpHq9J5nmMvsPseu4RcqzNaQvw5haLJsQqt0HLmSNJtbid7y2kxIipLb1hZwBP5gx-ZIdH71ha2sHnU9iy8kbtB51Y5tHpLdGIcUYnJfot6KLcOe6xS2sPGBEOGVhleDZ83q7d7l9kxhO6HkHdmUPURKbTELhCqw8pf-r7fiE8usjs79rMvOl3im3n1pT0kPRXHzHN_DY7SMgFhzOa2MXDTiDCKn-33Ump/http%3A%2F%2Flists.starlingx.io%2Fcgi-bin%2Fmailman%2Flistinfo%2Fstarlingx-discuss > > > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Thu Feb 18 07:33:24 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 18 Feb 2021 07:33:24 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210217T163107Z Message-ID: Sanity Test from 2021-February-17 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210217T163107Z/outputs/iso/ ) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210217T163107Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz I set the status on RED, because, during the Daily Sanity execution, we were unable to install STANDARD configuration on baremetal. See: https://bugs.launchpad.net/starlingx/+bug/1915864 Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From alexandru.dimofte at intel.com Thu Feb 18 10:02:29 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 18 Feb 2021 10:02:29 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210217T232923Z Message-ID: Sanity Test from 2021-February-17 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210217T232923Z/outputs/iso/ ) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210217T232923Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz I set the status on RED, because, during the Daily Sanity execution, we were unable to install STANDARD configuration on baremetal. See: https://bugs.launchpad.net/starlingx/+bug/1915864 Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From tuongvx at dcn.ssu.ac.kr Thu Feb 18 10:39:50 2021 From: tuongvx at dcn.ssu.ac.kr (Xuan Tuong Vu) Date: Thu, 18 Feb 2021 19:39:50 +0900 Subject: [Starlingx-discuss] Upgrade Kubernetes version questions Message-ID: Hi everybody, I found the document for upgrading K8s version here at https://docs.starlingx.io/configuration/k8s_upgrade.html#build-patches-manually However, I got stuck when the document does not clearly show how to build K8s upgrade patches. Since after installing R4 version, I could not found any folders/fille in that document for building such as: /stx/downloads, sysinv/sysinv/sysinv/sysinv/common/kubernetes.py, /stx/integ, ... Could anybody tell me how to get it? Or are there any other documents? Thank you, *Xuan-Tuong Vu (Master Student)* Distributed Cloud and Network lab (DCN Lab) School of Electronic Engineering, Soongsil Univ. 369 Sangdo-ro, Dongjak-gu, Seoul. Mobile +82-10-6727-1895 Email tuongvx at dcn.ssu.ac.kr -------------- next part -------------- An HTML attachment was scrubbed... URL: From Venkata.Veldanda at radisys.com Thu Feb 18 14:17:14 2021 From: Venkata.Veldanda at radisys.com (Venkata Ramana Veldanda) Date: Thu, 18 Feb 2021 14:17:14 +0000 Subject: [Starlingx-discuss] Private Docker Registries Message-ID: Hi, I am using STX4.0 Is there a supported way (either a system CLI or GUI) to add private docker registry ?. For example - I try to add the following here and it would reset upon the reboot. controller-0:/etc/docker# cat /etc/docker/daemon.json { "insecure-registries" : ["artifactory.myownregistry.com:8093"] } Or do we only do this the puppet manifest way? Regards, Venkata Veldanda -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tao.Liu at windriver.com Thu Feb 18 14:52:34 2021 From: Tao.Liu at windriver.com (Liu, Tao) Date: Thu, 18 Feb 2021 14:52:34 +0000 Subject: [Starlingx-discuss] Alarm "Memory threshold" In-Reply-To: References: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com> Message-ID: Yes, you can modify the host memory reservation via system command. # system host-memory-modify -m system host-memory-modify -m 6400 1 0 # show current memory configuration system host-memory-show system host-memory-list Tao From: open infra Sent: Wednesday, February 17, 2021 9:55 PM To: Ildiko Vancsa Cc: Rai, Ankush ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Alarm "Memory threshold" [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, Is there a way we can adjust memory and CPU of controller-0 before we deploy starlingx? In my case without having a single VM under OpenStack has following memory usage though host machine (I use AIO simplex on the virtualized environment) has enough memory. controller-0:~$ free -m total used free shared buff/cache available Mem: 17894 11131 406 65 6356 5967 Swap: 0 0 0 On Sun, Feb 14, 2021 at 7:55 PM Ildiko Vancsa > wrote: Hi, I saw that you have a ‘reserved for platform’ memory entry with the value of 4600 MiB. I’ve found entries below that report platform memory usage having an over 100% value: daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.3 MiB (Base: 4684.1, k8s-system: 590.2), k8s-addon: 0.0 So while the overall memory usage in the system isn’t over the threshold I assume the usage of that reserved amount of memory still exceeds it. 
Have you seen any configuration option to increase the amount of platform memory? You can also look into collectd if that leads you closer to what it is reading to get those values. I don’t have access to a StarlingX install to look into this, so I cannot tell where to look for that. I found some documentation for Kubernetes to set memory limits, but I’m not sure that applies here. (https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) Have you looked into the above already? Thanks, Ildikó > On Feb 14, 2021, at 10:51, Rai, Ankush > wrote: > > Not sure exactly which log file to check. Captured some data here, please check if this can help. > > Software Version: 20.06 > Memory: > Reserved for Platform: 4600 MiB > Usable Total: 13293 MiB > Available: 13293 MiB > > The fm has logged these events. > > fm-event.log:2021-02-10T08:57:25.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 80.00%, actual 88.83%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "major", "state" : "set", "timestamp" : "2021-02-10 08:57:25.484131" } > fm-event.log:2021-02-10T09:17:55.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "critical", "state" : "set", "timestamp" : "2021-02-10 09:17:55.482154" } > [snip] > > From: Ildiko Vancsa > > Sent: Sunday, February 14, 2021 2:42 PM > To: Rai, Ankush > > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Alarm "Memory threshold" > > > Hi Ankush, > > Do you have any log entries on the system you could share here that show the memory readings the alarm might be triggered by? > > Thanks, > Ildikó > > > > On Feb 14, 2021, at 09:26, Rai, Ankush > wrote: > > > > Hi, > > > > Below alarm is getting raised for every node of the central and edge cloud. > > > > “Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%” > > > > It looks to be the false alarm as nodes are having enough available memory. Please config the root cause of this alarm. > > > > Thanks, > > Ankush > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://secure-web.cisco.com/1PV8HDMvVRW0RqVu-SKTRWk6WWzv0LMknZ5lZPnq_3AC44IJMRoTkCmBzbDzfSRKy1sR51Ro2VO06UwWt-4FdJVml-0HlHOjpHq9J5nmMvsPseu4RcqzNaQvw5haLJsQqt0HLmSNJtbid7y2kxIipLb1hZwBP5gx-ZIdH71ha2sHnU9iy8kbtB51Y5tHpLdGIcUYnJfot6KLcOe6xS2sPGBEOGVhleDZ83q7d7l9kxhO6HkHdmUPURKbTELhCqw8pf-r7fiE8usjs79rMvOl3im3n1pT0kPRXHzHN_DY7SMgFhzOa2MXDTiDCKn-33Ump/http%3A%2F%2Flists.starlingx.io%2Fcgi-bin%2Fmailman%2Flistinfo%2Fstarlingx-discuss > > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From OwenYuen at cmail.carleton.ca Wed Feb 17 21:26:02 2021
From: OwenYuen at cmail.carleton.ca (Owen Yuen)
Date: Wed, 17 Feb 2021 21:26:02 +0000
Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI
In-Reply-To: References: , Message-ID:
Thanks Mingyuan, A follow-up question: what is the benefit of deploying an application with ‘system application’ over deploying directly from helm? Thanks again Owen
From: Qi, Mingyuan Sent: Tuesday, February 9, 2021 2:20 AM To: Owen Yuen; starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut; Aidan Seguin-McPeake Subject: RE: How to deploy workloads from StarlingX GUI/CLI [External Email] Hi Owen, You could create an armada application with app-gen-tool[0] and apply it with the ‘system application’ CLI. There is no panel in the StarlingX dashboard GUI to manage applications so far. [0] https://opendev.org/starlingx/tools/src/branch/master/app-gen-tool Mingyuan
From: Owen Yuen Sent: Tuesday, February 9, 2021 10:25 To: starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut; Aidan Seguin-McPeake Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI Is it possible to deploy a workload via STX instead of via kubectl or the kubernetes GUI directly? We are running a distributed cloud AIO duplex setup, so our worker hosts are on con0 and 1, if that makes a difference. Also, how does StarlingX manage workloads from the GUI? Any help would be greatly appreciated. Thanks Owen
This email contains links to content or websites. Always be cautious when clicking on external links or attachments. If in doubt, please forward suspicious emails to phishing at carleton.ca. -----End of Disclaimer-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at optimcloud.com Thu Feb 18 15:01:43 2021
From: lists at optimcloud.com (lists at optimcloud.com)
Date: Thu, 18 Feb 2021 22:01:43 +0700
Subject: [Starlingx-discuss] incomplete openstack horizon ui
Message-ID: <5f2d8946b0d908c1a38aca5cf11a461c@optimcloud.com>
deployed stx aio on bare-metal and deployed openstack, says its running. i can login to os horizon, yet under platform i dont see the normal links for launching instances. under platform all it lists is:
Platform Software Management
Host Inventory
Data Networks
Data Network Topology
Storage Overview
System Configuration
soooo what did i miss.....

From Ankush.Rai at commscope.com Thu Feb 18 15:03:45 2021
From: Ankush.Rai at commscope.com (Rai, Ankush)
Date: Thu, 18 Feb 2021 15:03:45 +0000
Subject: [Starlingx-discuss] Alarm "Memory threshold"
In-Reply-To: References: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com> Message-ID:
Thanks Tao. Yes, I tried it and set the memory to 8000 MiB. With the increased memory, the alarm is not raised in the system. With 6400 MiB it raises a Major alarm. Thanks, Ankush
From: Liu, Tao Sent: Thursday, February 18, 2021 8:23 PM To: open infra ; Ildiko Vancsa Cc: Rai, Ankush ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Alarm "Memory threshold" Yes, you can modify the host memory reservation via system command.
# system host-memory-modify -m system host-memory-modify -m 6400 1 0 # show current memory configuration system host-memory-show system host-memory-list Tao From: open infra > Sent: Wednesday, February 17, 2021 9:55 PM To: Ildiko Vancsa > Cc: Rai, Ankush >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Alarm "Memory threshold" [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, Is there a way we can adjust memory and CPU of controller-0 before we deploy starlingx? In my case without having a single VM under OpenStack has following memory usage though host machine (I use AIO simplex on the virtualized environment) has enough memory. controller-0:~$ free -m total used free shared buff/cache available Mem: 17894 11131 406 65 6356 5967 Swap: 0 0 0 On Sun, Feb 14, 2021 at 7:55 PM Ildiko Vancsa > wrote: Hi, I saw that you have a ‘reserved for platform’ memory entry with the value of 4600 MiB. I’ve found entries below that report platform memory usage having an over 100% value: daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: 5274.3 MiB (Base: 4684.1, k8s-system: 590.2), k8s-addon: 0.0 So while the overall memory usage in the system isn’t over the threshold I assume the usage of that reserved amount of memory still exceeds it. Have you seen any configuration option to increase the amount of platform memory? You can also look into collectd if that leads you closer to what it is reading to get those values. I don’t have access to a StarlingX install to look into this, so I cannot tell where to look for that. I found some documentation for Kubernetes to set memory limits, but I’m not sure that applies here. (https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/) Have you looked into the above already? Thanks, Ildikó > On Feb 14, 2021, at 10:51, Rai, Ankush > wrote: > > Not sure exactly which log file to check. Captured some data here, please check if this can help. > > Software Version: 20.06 > Memory: > Reserved for Platform: 4600 MiB > Usable Total: 13293 MiB > Available: 13293 MiB > > The fm has logged these events. > > fm-event.log:2021-02-10T08:57:25.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 80.00%, actual 88.83%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "major", "state" : "set", "timestamp" : "2021-02-10 08:57:25.484131" } > fm-event.log:2021-02-10T09:17:55.000 controller-0 fmManager: info { "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%", "entity_instance_id" : "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", "severity" : "critical", "state" : "set", "timestamp" : "2021-02-10 09:17:55.482154" } > [snip] > > From: Ildiko Vancsa > > Sent: Sunday, February 14, 2021 2:42 PM > To: Rai, Ankush > > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Alarm "Memory threshold" > > > Hi Ankush, > > Do you have any log entries on the system you could share here that show the memory readings the alarm might be triggered by? > > Thanks, > Ildikó > > > > On Feb 14, 2021, at 09:26, Rai, Ankush > wrote: > > > > Hi, > > > > Below alarm is getting raised for every node of the central and edge cloud. 
> > > > “Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%” > > > > It looks to be the false alarm as nodes are having enough available memory. Please config the root cause of this alarm. > > > > Thanks, > > Ankush > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://secure-web.cisco.com/1PV8HDMvVRW0RqVu-SKTRWk6WWzv0LMknZ5lZPnq_3AC44IJMRoTkCmBzbDzfSRKy1sR51Ro2VO06UwWt-4FdJVml-0HlHOjpHq9J5nmMvsPseu4RcqzNaQvw5haLJsQqt0HLmSNJtbid7y2kxIipLb1hZwBP5gx-ZIdH71ha2sHnU9iy8kbtB51Y5tHpLdGIcUYnJfot6KLcOe6xS2sPGBEOGVhleDZ83q7d7l9kxhO6HkHdmUPURKbTELhCqw8pf-r7fiE8usjs79rMvOl3im3n1pT0kPRXHzHN_DY7SMgFhzOa2MXDTiDCKn-33Ump/http%3A%2F%2Flists.starlingx.io%2Fcgi-bin%2Fmailman%2Flistinfo%2Fstarlingx-discuss > > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Thu Feb 18 15:05:51 2021 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Thu, 18 Feb 2021 15:05:51 +0000 Subject: [Starlingx-discuss] Upgrade Kubernetes version questions In-Reply-To: References: Message-ID: This old (deprecated) wiki page has info for creating a no-delta kubernetes upgrade patch. It allows you to take the version we deploy now, and package it as a higher version, so you can make sure your build environment is working and the patches are setup properly. https://wiki.openstack.org/wiki/StarlingX/Containers/K8sUpgradesTesting You need to have a build environment setup in order to build the new rpms for those patches. Al ________________________________ From: Xuan Tuong Vu Sent: Thursday, February 18, 2021 5:39 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Upgrade Kubernetes version questions [Please note: This e-mail is from an EXTERNAL e-mail address] Hi everybody, I found the document for upgrading K8s version here at https://docs.starlingx.io/configuration/k8s_upgrade.html#build-patches-manually However, I got stuck when the document does not clearly show how to build K8s upgrade patches. Since after installing R4 version, I could not found any folders/fille in that document for building such as: /stx/downloads, sysinv/sysinv/sysinv/sysinv/common/kubernetes.py, /stx/integ, ... Could anybody tell me how to get it? Or are there any other documents? Thank you, Xuan-Tuong Vu (Master Student) Distributed Cloud and Network lab (DCN Lab) School of Electronic Engineering, Soongsil Univ. 369 Sangdo-ro, Dongjak-gu, Seoul. Mobile +82-10-6727-1895 Email tuongvx at dcn.ssu.ac.kr -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Feb 18 15:11:42 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 18 Feb 2021 20:41:42 +0530 Subject: [Starlingx-discuss] How low latency implemented in STX? Message-ID: Hi, I would like to know how low latency implemented in StarlingX. - Is it implemented only in starlingx itself or Is it implemented in Kubernetes shipped with starlingx? Any optimization has been done to Neutron (under OpenStack) in order to achieve low latency with virtualized networking? or do we use exact same upstream OpenStack components under StarlingX? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bruce.e.jones at intel.com Thu Feb 18 15:20:20 2021 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 18 Feb 2021 15:20:20 +0000 Subject: [Starlingx-discuss] How low latency implemented in STX? In-Reply-To: References: Message-ID: There are several low latency features in StarlingX, depending on what you mean by low latency. StarlingX supports the rt-linux kernel, which significantly improves kernel scheduling and reduces rt application latency. It also supports OVS DPDK, which greatly reduces network packet delivery latency, which I believe is only supported today in Neutron. There is work in progress in the community to enable DPDK in Kubernetes but AFAIK that work has not yet been picked up by StarlingX. brucej From: open infra Sent: Thursday, February 18, 2021 7:12 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How low latency implemented in STX? Hi, I would like to know how low latency implemented in StarlingX. - Is it implemented only in starlingx itself or Is it implemented in Kubernetes shipped with starlingx? Any optimization has been done to Neutron (under OpenStack) in order to achieve low latency with virtualized networking? or do we use exact same upstream OpenStack components under StarlingX? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Feb 18 15:40:25 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 18 Feb 2021 21:10:25 +0530 Subject: [Starlingx-discuss] Alarm "Memory threshold" In-Reply-To: References: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com> Message-ID: Thank Liu. On Thu, Feb 18, 2021 at 8:22 PM Liu, Tao wrote: > Yes, you can modify the host memory reservation via system command. > > > > # system host-memory-modify -m > > > system host-memory-modify -m 6400 1 0 > > > > # show current memory configuration > > system host-memory-show > > system host-memory-list > > > > Tao > > > > *From:* open infra > *Sent:* Wednesday, February 17, 2021 9:55 PM > *To:* Ildiko Vancsa > *Cc:* Rai, Ankush ; > starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Alarm "Memory threshold" > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi, > > > > Is there a way we can adjust memory and CPU of controller-0 before we > deploy starlingx? > > In my case without having a single VM under OpenStack has following memory > usage though host machine (I use AIO simplex on the virtualized > environment) has enough memory. > > > > controller-0:~$ free -m > total used free shared buff/cache > available > Mem: 17894 11131 406 65 6356 > 5967 > Swap: 0 0 0 > > > > On Sun, Feb 14, 2021 at 7:55 PM Ildiko Vancsa > wrote: > > Hi, > > I saw that you have a ‘reserved for platform’ memory entry with the value > of 4600 MiB. > > I’ve found entries below that report platform memory usage having an over > 100% value: > > daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info > platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: > 5274.3 MiB (Base: 4684.1, k8s-system: 590.2), k8s-addon: 0.0 > > So while the overall memory usage in the system isn’t over the threshold I > assume the usage of that reserved amount of memory still exceeds it. Have > you seen any configuration option to increase the amount of platform memory? > > You can also look into collectd if that leads you closer to what it is > reading to get those values. 
> > I don’t have access to a StarlingX install to look into this, so I cannot > tell where to look for that. > > I found some documentation for Kubernetes to set memory limits, but I’m > not sure that applies here. ( > https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/ > ) > > Have you looked into the above already? > > Thanks, > Ildikó > > > > On Feb 14, 2021, at 10:51, Rai, Ankush wrote: > > > > Not sure exactly which log file to check. Captured some data here, > please check if this can help. > > > > Software Version: 20.06 > > Memory: > > Reserved for Platform: 4600 MiB > > Usable Total: 13293 MiB > > Available: 13293 MiB > > > > The fm has logged these events. > > > > fm-event.log:2021-02-10T08:57:25.000 controller-0 fmManager: info { > "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold > exceeded ; threshold 80.00%, actual 88.83%", "entity_instance_id" : > "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", > "severity" : "major", "state" : "set", "timestamp" : "2021-02-10 > 08:57:25.484131" } > > fm-event.log:2021-02-10T09:17:55.000 controller-0 fmManager: info { > "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold > exceeded ; threshold 90.00%, actual 95.19%", "entity_instance_id" : > "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", > "severity" : "critical", "state" : "set", "timestamp" : "2021-02-10 > 09:17:55.482154" } > > > > [snip] > > > > > From: Ildiko Vancsa > > Sent: Sunday, February 14, 2021 2:42 PM > > To: Rai, Ankush > > Cc: starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] Alarm "Memory threshold" > > > > > > Hi Ankush, > > > > Do you have any log entries on the system you could share here that show > the memory readings the alarm might be triggered by? > > > > Thanks, > > Ildikó > > > > > > > On Feb 14, 2021, at 09:26, Rai, Ankush > wrote: > > > > > > Hi, > > > > > > Below alarm is getting raised for every node of the central and edge > cloud. > > > > > > “Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%” > > > > > > It looks to be the false alarm as nodes are having enough available > memory. Please config the root cause of this alarm. > > > > > > Thanks, > > > Ankush > > > > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > > http://secure-web.cisco.com/1PV8HDMvVRW0RqVu-SKTRWk6WWzv0LMknZ5lZPnq_3AC44IJMRoTkCmBzbDzfSRKy1sR51Ro2VO06UwWt-4FdJVml-0HlHOjpHq9J5nmMvsPseu4RcqzNaQvw5haLJsQqt0HLmSNJtbid7y2kxIipLb1hZwBP5gx-ZIdH71ha2sHnU9iy8kbtB51Y5tHpLdGIcUYnJfot6KLcOe6xS2sPGBEOGVhleDZ83q7d7l9kxhO6HkHdmUPURKbTELhCqw8pf-r7fiE8usjs79rMvOl3im3n1pT0kPRXHzHN_DY7SMgFhzOa2MXDTiDCKn-33Ump/http%3A%2F%2Flists.starlingx.io%2Fcgi-bin%2Fmailman%2Flistinfo%2Fstarlingx-discuss > > > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Feb 18 15:50:51 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 18 Feb 2021 21:20:51 +0530 Subject: [Starlingx-discuss] How low latency implemented in STX? 
In-Reply-To: References: Message-ID: Hi On Thu, Feb 18, 2021 at 8:50 PM Jones, Bruce E wrote: > There are several low latency features in StarlingX, depending on what you > mean by low latency. > I just asked to understand as 'ultra-low latency' feature is mentioned in the stx web site. > > StarlingX supports the rt-linux kernel, which significantly improves > kernel scheduling and reduces rt application latency. > So, we use low latency profile with RT kernel? Or customized profile that specific to stx? > > It also supports OVS DPDK, which greatly reduces network packet delivery > latency, which I believe is only supported today in Neutron. There is work > in progress in the communitfeature y to enable DPDK in Kubernetes but AFAIK > that work has not yet been picked up by StarlingX. > Thanks for sharing this information. Could be a dumb question: Does it provide any specific protocols to improve the performance of video streaming etc? > > brucej > > > > *From:* open infra > *Sent:* Thursday, February 18, 2021 7:12 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] How low latency implemented in STX? > > > > Hi, > > > > > > I would like to know how low latency implemented in StarlingX. > > - Is it implemented only in starlingx itself > > or > > Is it implemented in Kubernetes shipped with starlingx? > > Any optimization has been done to Neutron (under OpenStack) in order to > achieve low latency with virtualized networking? > > or do we use exact same upstream OpenStack components under StarlingX? > > > > Regards, > > Danishka > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Feb 18 15:55:42 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 18 Feb 2021 21:25:42 +0530 Subject: [Starlingx-discuss] Alarm "Memory threshold" In-Reply-To: References: <1821C2AB-A1F9-46C4-9905-623216984360@gmail.com> Message-ID: After increasing to 6400MiB I see intermittent outage in the stx dashboard and also noticed multiple warnings of 'Multi-Node Recovery Mode' in events. I did lock the host before set reserved memory for the platform and then unlocked the host. On Thu, Feb 18, 2021 at 9:10 PM open infra wrote: > Thank Liu. > > > > > > On Thu, Feb 18, 2021 at 8:22 PM Liu, Tao wrote: > >> Yes, you can modify the host memory reservation via system command. >> >> >> >> # system host-memory-modify -m >> >> >> system host-memory-modify -m 6400 1 0 >> >> >> >> # show current memory configuration >> >> system host-memory-show >> >> system host-memory-list >> >> >> >> Tao >> >> >> >> *From:* open infra >> *Sent:* Wednesday, February 17, 2021 9:55 PM >> *To:* Ildiko Vancsa >> *Cc:* Rai, Ankush ; >> starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] Alarm "Memory threshold" >> >> >> >> [Please note: This e-mail is from an EXTERNAL e-mail address] >> >> Hi, >> >> >> >> Is there a way we can adjust memory and CPU of controller-0 before we >> deploy starlingx? >> >> In my case without having a single VM under OpenStack has following >> memory usage though host machine (I use AIO simplex on the virtualized >> environment) has enough memory. >> >> >> >> controller-0:~$ free -m >> total used free shared buff/cache >> available >> Mem: 17894 11131 406 65 6356 >> 5967 >> Swap: 0 0 0 >> >> >> >> On Sun, Feb 14, 2021 at 7:55 PM Ildiko Vancsa >> wrote: >> >> Hi, >> >> I saw that you have a ‘reserved for platform’ memory entry with the value >> of 4600 MiB. 
>> >> I’ve found entries below that report platform memory usage having an over >> 100% value: >> >> daemon.log:2021-02-14T07:14:55.000 controller-0 collectd[130887]: info >> platform memory usage: Usage: 114.7%; Reserved: 4600.0 MiB, Platform: >> 5274.3 MiB (Base: 4684.1, k8s-system: 590.2), k8s-addon: 0.0 >> >> So while the overall memory usage in the system isn’t over the threshold >> I assume the usage of that reserved amount of memory still exceeds it. Have >> you seen any configuration option to increase the amount of platform memory? >> >> You can also look into collectd if that leads you closer to what it is >> reading to get those values. >> >> I don’t have access to a StarlingX install to look into this, so I cannot >> tell where to look for that. >> >> I found some documentation for Kubernetes to set memory limits, but I’m >> not sure that applies here. ( >> https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/ >> ) >> >> Have you looked into the above already? >> >> Thanks, >> Ildikó >> >> >> > On Feb 14, 2021, at 10:51, Rai, Ankush >> wrote: >> > >> > Not sure exactly which log file to check. Captured some data here, >> please check if this can help. >> > >> > Software Version: 20.06 >> > Memory: >> > Reserved for Platform: 4600 MiB >> > Usable Total: 13293 MiB >> > Available: 13293 MiB >> > >> > The fm has logged these events. >> > >> > fm-event.log:2021-02-10T08:57:25.000 controller-0 fmManager: info { >> "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold >> exceeded ; threshold 80.00%, actual 88.83%", "entity_instance_id" : >> "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", >> "severity" : "major", "state" : "set", "timestamp" : "2021-02-10 >> 08:57:25.484131" } >> > fm-event.log:2021-02-10T09:17:55.000 controller-0 fmManager: info { >> "event_log_id" : "100.103", "reason_text" : "Platform Memory threshold >> exceeded ; threshold 90.00%, actual 95.19%", "entity_instance_id" : >> "region=RegionOne.system=6e232236-df0a-4bc5-9006-76f782b5493f.host=controller-0", >> "severity" : "critical", "state" : "set", "timestamp" : "2021-02-10 >> 09:17:55.482154" } >> > >> >> [snip] >> >> > >> > From: Ildiko Vancsa >> > Sent: Sunday, February 14, 2021 2:42 PM >> > To: Rai, Ankush >> > Cc: starlingx-discuss at lists.starlingx.io >> > Subject: Re: [Starlingx-discuss] Alarm "Memory threshold" >> > >> > >> > Hi Ankush, >> > >> > Do you have any log entries on the system you could share here that >> show the memory readings the alarm might be triggered by? >> > >> > Thanks, >> > Ildikó >> > >> > >> > > On Feb 14, 2021, at 09:26, Rai, Ankush >> wrote: >> > > >> > > Hi, >> > > >> > > Below alarm is getting raised for every node of the central and edge >> cloud. >> > > >> > > “Platform Memory threshold exceeded ; threshold 90.00%, actual 95.19%” >> > > >> > > It looks to be the false alarm as nodes are having enough available >> memory. Please config the root cause of this alarm. 
>> > > >> > > Thanks, >> > > Ankush >> > > >> > > >> > > _______________________________________________ >> > > Starlingx-discuss mailing list >> > > Starlingx-discuss at lists.starlingx.io >> > > >> http://secure-web.cisco.com/1PV8HDMvVRW0RqVu-SKTRWk6WWzv0LMknZ5lZPnq_3AC44IJMRoTkCmBzbDzfSRKy1sR51Ro2VO06UwWt-4FdJVml-0HlHOjpHq9J5nmMvsPseu4RcqzNaQvw5haLJsQqt0HLmSNJtbid7y2kxIipLb1hZwBP5gx-ZIdH71ha2sHnU9iy8kbtB51Y5tHpLdGIcUYnJfot6KLcOe6xS2sPGBEOGVhleDZ83q7d7l9kxhO6HkHdmUPURKbTELhCqw8pf-r7fiE8usjs79rMvOl3im3n1pT0kPRXHzHN_DY7SMgFhzOa2MXDTiDCKn-33Ump/http%3A%2F%2Flists.starlingx.io%2Fcgi-bin%2Fmailman%2Flistinfo%2Fstarlingx-discuss >> > >> > >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steven.Webster at windriver.com Thu Feb 18 17:39:45 2021 From: Steven.Webster at windriver.com (Webster, Steven) Date: Thu, 18 Feb 2021 17:39:45 +0000 Subject: [Starlingx-discuss] FYI: Upcoming uprev of SR-IOV CNI image to stx.5.0-v2.6-7-gb18123d8 Message-ID: Hi All, Just an FYI for those that are using a private image mirror or proxy. In the day or two, the SR-IOV CNI image will be up-versioned to pick up a few bug fixes. As part of this activity, the following image should be added to your mirror: docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8 Ref: https://review.opendev.org/c/starlingx/integ/+/776261 https://review.opendev.org/c/starlingx/root/+/776273 I will send another update to this list once the ansible-playbook repo is referencing the new image, but you are welcome to pull it in the meantime in preparation. There are no API / command changes required from a user of the CNI. Previous functionality has been verified and should work as before. Cheers, Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Feb 18 17:40:21 2021 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 18 Feb 2021 17:40:21 +0000 Subject: [Starlingx-discuss] How low latency implemented in STX? In-Reply-To: References: Message-ID: Forgive my top-posting email client. Yes, you can deploy the low latency profile for worker nodes and get the rt kernel. No, I’m not aware of anything in StarlingX specific to video streaming. I wouldn’t even know what’s needed, perhaps there are folks in the community who have that background. brucej From: open infra Sent: Thursday, February 18, 2021 7:51 AM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How low latency implemented in STX? Hi On Thu, Feb 18, 2021 at 8:50 PM Jones, Bruce E > wrote: There are several low latency features in StarlingX, depending on what you mean by low latency. I just asked to understand as 'ultra-low latency' feature is mentioned in the stx web site. StarlingX supports the rt-linux kernel, which significantly improves kernel scheduling and reduces rt application latency. So, we use low latency profile with RT kernel? Or customized profile that specific to stx? It also supports OVS DPDK, which greatly reduces network packet delivery latency, which I believe is only supported today in Neutron. There is work in progress in the communitfeature y to enable DPDK in Kubernetes but AFAIK that work has not yet been picked up by StarlingX. Thanks for sharing this information. 
Could be a dumb question: Does it provide any specific protocols to improve the performance of video streaming etc? brucej From: open infra > Sent: Thursday, February 18, 2021 7:12 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How low latency implemented in STX? Hi, I would like to know how low latency implemented in StarlingX. - Is it implemented only in starlingx itself or Is it implemented in Kubernetes shipped with starlingx? Any optimization has been done to Neutron (under OpenStack) in order to achieve low latency with virtualized networking? or do we use exact same upstream OpenStack components under StarlingX? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Thu Feb 18 17:49:36 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Thu, 18 Feb 2021 17:49:36 +0000 Subject: [Starlingx-discuss] FYI: Upcoming uprev of SR-IOV CNI image to stx.5.0-v2.6-7-gb18123d8 In-Reply-To: References: Message-ID: Thank you Steven for letting us know. Regards, Nicolae Jascanu, Ph.D. Software Engineer IOTG Galati, Romania From: Webster, Steven Sent: Thursday, February 18, 2021 19:40 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] FYI: Upcoming uprev of SR-IOV CNI image to stx.5.0-v2.6-7-gb18123d8 Hi All, Just an FYI for those that are using a private image mirror or proxy. In the day or two, the SR-IOV CNI image will be up-versioned to pick up a few bug fixes. As part of this activity, the following image should be added to your mirror: docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8 Ref: https://review.opendev.org/c/starlingx/integ/+/776261 https://review.opendev.org/c/starlingx/root/+/776273 I will send another update to this list once the ansible-playbook repo is referencing the new image, but you are welcome to pull it in the meantime in preparation. There are no API / command changes required from a user of the CNI. Previous functionality has been verified and should work as before. Cheers, Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sriram.Dharwadkar at commscope.com Thu Feb 18 18:00:41 2021 From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram) Date: Thu, 18 Feb 2021 18:00:41 +0000 Subject: [Starlingx-discuss] k8s pods cant use application-isolated cpus Message-ID: Hi, We are using Distributed-StarlingX-4.0. Our use case requires pods to use isolated cpu's, so executed the "system host-cpu-modify -f application-isolated -p0 20 controller-0" command to isolate the cpus. 
system host-cpu-list controller-1 +--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+----------------------+ | uuid | log_core | processor | phy_core | thread | processor_model | assigned_function | +--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+----------------------+ | e64e0004-3096-40c7-804c-d9c79d2ceb17 | 0 | 0 | 0 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Platform | | 28697ba9-c5e5-4d9b-9d97-346ec61c1fa7 | 1 | 0 | 1 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Platform | | 1b111e2e-84e4-4414-870f-73c759ecbac0 | 2 | 0 | 2 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 66933896-5c9f-4e0a-9527-72025172e1f6 | 3 | 0 | 3 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 027258c8-392e-4497-9e46-6acb007e759f | 4 | 0 | 4 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 5d88a23a-68de-4062-9313-173fd91a4bc5 | 5 | 0 | 5 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | dd804bbb-191c-4499-bda1-272f9bb923af | 6 | 0 | 6 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 7ae2a90b-59ca-4af0-b20e-20635af0aceb | 7 | 0 | 8 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | ada68f62-4d06-43eb-8ed9-cd6ec6d33487 | 8 | 0 | 9 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 8c2612fd-8a7f-4732-82ff-fb4717ab9275 | 9 | 0 | 10 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 01412e4d-c9ff-41e8-8fd3-c2a0d27ac7a3 | 10 | 0 | 11 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | c8e8b15b-538b-422e-9d72-2ba96982054a | 11 | 0 | 12 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | b2efd166-3125-46b9-95ac-4fb00a0de3c7 | 12 | 0 | 13 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | c33f3dd7-0b06-40c3-93b3-82f065c6db05 | 13 | 0 | 16 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | e33cda3e-7162-45d4-af79-4dff7f0bb07f | 14 | 0 | 17 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 551b7272-371e-4c05-b5c8-7708f95d3a97 | 15 | 0 | 18 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | d0522e8f-ebd4-49fd-8a0e-26de857f2caf | 16 | 0 | 19 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 84dd71f2-ad7a-46bb-8ae2-e62b2a817cc5 | 17 | 0 | 20 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | f4761605-3208-4758-b31c-d679cb0b631e | 18 | 0 | 21 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 66423b6f-134b-4166-bdff-9643d4ec8d4d | 19 | 0 | 25 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 45d38571-ca97-4b89-b442-fe5f5282bc5f | 20 | 0 | 26 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | fc87bbd5-2954-4730-9f67-b8fe8820faa7 | 21 | 0 | 27 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated | | 6b3a8a78-d191-47fa-af82-15166a7924fc | 22 | 0 | 28 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application | | 57739379-ab8f-4d3d-847a-f5aa9663bcfb | 23 | 0 | 29 | 0 | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application | +--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+----------------------+ But the k8s is 
not able to allocate isolated cpus to pods. Allocatable cpus are only 2. controller-0:/home/sysadmin# kubectl get node controller-0 -o json | jq '.status.allocatable' { "cpu": "2", "ephemeral-storage": "9391196145", "hugepages-1Gi": "20Gi", "hugepages-2Mi": "0", "intel.com/pci_sriov_net_physnet0": "32", "memory": "70207428Ki", "pods": "110" } I saw this bug report, which seems to be talking about the same issue. https://bugs.launchpad.net/starlingx/+bug/1894173 Kindly let me know when the fix is planned. If there is any work around to solve this issue, do let me know. Regards, Sirram -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Fri Feb 19 01:18:32 2021 From: austin.sun at intel.com (Sun, Austin) Date: Fri, 19 Feb 2021 01:18:32 +0000 Subject: [Starlingx-discuss] Private Docker Registries In-Reply-To: References: Message-ID: Hi Venkata: You can try command below : "system service-parameter-modify docker docker-registry url=xxxx" command to modify registry , suggest you lock/unlock controller once changes Thanks. BR Austin Sun. From: Venkata Ramana Veldanda Sent: Thursday, February 18, 2021 10:17 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Private Docker Registries Hi, I am using STX4.0 Is there a supported way (either a system CLI or GUI) to add private docker registry ?. For example - I try to add the following here and it would reset upon the reboot. controller-0:/etc/docker# cat /etc/docker/daemon.json { "insecure-registries" : ["artifactory.myownregistry.com:8093"] } Or do we only do this the puppet manifest way? Regards, Venkata Veldanda -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Fri Feb 19 03:09:08 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Feb 2021 22:09:08 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1559 - Failure! Message-ID: <1811710909.162.1613704149565.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1559 Status: Failure Timestamp: 20210219T024636Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210219T023124Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210219T023124Z DOCKER_BUILD_ID: jenkins-master-distro-20210219T023124Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210219T023124Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210219T023124Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/distro From Venkata.Veldanda at radisys.com Fri Feb 19 14:05:26 2021 From: Venkata.Veldanda at radisys.com (Venkata Ramana Veldanda) Date: Fri, 19 Feb 2021 14:05:26 +0000 Subject: [Starlingx-discuss] Private Docker Registries In-Reply-To: References: Message-ID: Thankyou I just realized that STX4.0 uses containerd as the runtime and have got the below question now. The registry that we use locally is a HTTP based. In order to make changes to adapt to the insecure registry we tried to do the following but it never works. 
/etc/containerd/config.toml:

[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    # Begin of insecure registries
    [plugin.cri.registry.mirrors."artifactory.myownregistry.com:8093"]
      endpoint = ["http://artifactory.myownregistry.com:8093"]
  [plugins.cri.registry.configs."artifactory.myownregistry:8093".tls]
    insecure_skip_verify = true
  [plugins.cri.registry.configs."artifactory.myownregistry:8093".auth]
    username = "myusername"
    password = "mypassword"

I understand that /etc/containerd/config.toml is not the right place to edit, as it is not persistent, but I was expecting it to work at least the first time. Would the same "system service-parameter-add" work for this too, or is there another API to do this? Regards, Venkata Veldanda
From: Sun, Austin Sent: Friday, February 19, 2021 6:49 AM To: Venkata Ramana Veldanda ; starlingx-discuss at lists.starlingx.io Subject: RE: Private Docker Registries Hi Venkata: You can try the command below: "system service-parameter-modify docker docker-registry url=xxxx" to modify the registry; I suggest a lock/unlock of the controller after the change. Thanks. BR Austin Sun.
From: Venkata Ramana Veldanda Sent: Thursday, February 18, 2021 10:17 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Private Docker Registries Hi, I am using STX4.0. Is there a supported way (either a system CLI or the GUI) to add a private docker registry? For example, I try to add the following here and it resets upon reboot.
controller-0:/etc/docker# cat /etc/docker/daemon.json
{
  "insecure-registries" : ["artifactory.myownregistry.com:8093"]
}
Or do we only do this the puppet manifest way? Regards, Venkata Veldanda
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openinfradn at gmail.com Sat Feb 20 19:36:28 2021
From: openinfradn at gmail.com (open infra)
Date: Sun, 21 Feb 2021 01:06:28 +0530
Subject: [Starlingx-discuss] STX Data Networks
Message-ID:
Hi, I have an AIO simplex environment configured as per the starlingx r4 documentation. Under the data network topology I noticed that controller-0 has no connections to the two available networks. Is there documentation on how to create data networks for starlingx/openstack? To be honest, setting up the networking of starlingx and openstack is not clear. Regards, Danishka
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mingyuan.qi at intel.com Mon Feb 22 01:54:50 2021
From: mingyuan.qi at intel.com (Qi, Mingyuan)
Date: Mon, 22 Feb 2021 01:54:50 +0000
Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI
In-Reply-To: References: , Message-ID:
Owen, Several enhancements:
1. Multiple helm charts deployed at a time, with dependency checks.
2. Plugin support to automatically generate system-related overrides from the sysinv API.
3. Overrides are re-generated/re-checked when system status changes.
4. Override operations via the 'system helm-override-xxx' commands.
If your application does not need system-level information, nor has complicated helm chart dependencies, you can use a helm chart as usual.
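If you do go the 'system application' route, the flow for a generated application archive typically looks like this (a sketch only; the name 'stx-myapp' and the path are placeholders):

system application-upload /home/sysadmin/stx-myapp.tgz
system application-apply stx-myapp
system application-list    # check that the app reaches the 'applied' status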
Mingyuan From: Owen Yuen Sent: Thursday, February 18, 2021 5:26 To: Qi, Mingyuan ; starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut ; Aidan Seguin-McPeake Subject: RE: How to deploy workloads from StarlingX GUI/CLI Thanks Mingyuan, A follow up question: what is the benefit of deploying an application with 'system application' over directly from helm? Thanks again Owen From: Qi, Mingyuan Sent: Tuesday, February 9, 2021 2:20 AM To: Owen Yuen; starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut; Aidan Seguin-McPeake Subject: RE: How to deploy workloads from StarlingX GUI/CLI [External Email] Hi Owen, You could create an armada application with app-gen-tool[0] and apply it by 'system application' CLI. There is no panel in StarlingX dashboard GUI to manage applications so far. [0] https://opendev.org/starlingx/tools/src/branch/master/app-gen-tool Mingyuan From: Owen Yuen > Sent: Tuesday, February 9, 2021 10:25 To: starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut >; Aidan Seguin-McPeake > Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI Is it possible to deploy a workload via STX instead of via kubectl or kubernetes GUI directly? We are running an distributed cloud AIO duplex setup so our worker hosts are on con0 and 1 if that makes a difference. Also, how does StarlingX manage workloads from the GUI? Any help would be greatly appreciated. Thanks Owen This email contains links to content or websites. Always be cautious when clicking on external links or attachments. If in doubt, please forward suspicious emails to phishing at carleton.ca. -----End of Disclaimer----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From yatindra.shashi at intel.com Mon Feb 22 10:53:22 2021 From: yatindra.shashi at intel.com (Shashi, Yatindra) Date: Mon, 22 Feb 2021 10:53:22 +0000 Subject: [Starlingx-discuss] STX Data Networks In-Reply-To: References: Message-ID: Hi Danishka, Have you looked in the below link , I hope it clarifies to you. Try with flat/VLAN data network and using it with Openstack for understanding. https://wiki.openstack.org/wiki/StarlingX/Networking Regards, Yatindra Shashi IoTG DE From: open infra Sent: 20 February 2021 20:36 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] STX Data Networks Hi, I have a AIO simplex environment configured as per the starlingx r4 documentation. Under data network topology I noticed that controller-0 without any connections to available two networks. Is there documentation of how to create data networks of starlingx/openstack? To be honest, setting up the networking of starlingx and openstack is not clear. Regards, Danishka Intel Deutschland GmbH Registered Address: Am Campeon 10, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Sharon Heck, Tiffany Doon Silva Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at optimcloud.com Mon Feb 22 14:42:31 2021 From: lists at optimcloud.com (lists at optimcloud.com) Date: Mon, 22 Feb 2021 21:42:31 +0700 Subject: [Starlingx-discuss] STX Data Networks In-Reply-To: References: Message-ID: On 2021-02-22 17:53, Shashi, Yatindra wrote: > Hi Danishka, > > Have you looked in the below link , I hope it clarifies to you. > > Try with flat/VLAN data network and using it with Openstack for > understanding. 
> > https://wiki.openstack.org/wiki/StarlingX/Networking > > Regards, > > Yatindra Shashi So... your claiming that this should be followed verbatim and all things applied to get openstack networking for vms up and running ? > > IoTG DE > > From: open infra > Sent: 20 February 2021 20:36 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] STX Data Networks > > Hi, > > I have a AIO simplex environment configured as per the starlingx r4 > documentation. > > Under data network topology I noticed that controller-0 without any > connections to available two networks. > > Is there documentation of how to create data networks of > starlingx/openstack? > > To be honest, setting up the networking of starlingx and openstack is > not clear. > > Regards, > > Danishka > > Intel Deutschland GmbH > Registered Address: Am Campeon 10, 85579 Neubiberg, Germany > Tel: +49 89 99 8853-0, www.intel.de [1] > Managing Directors: Christin Eisenschmid, Sharon Heck, Tiffany Doon > Silva > Chairperson of the Supervisory Board: Nicole Lau > Registered Office: Munich > Commercial Register: Amtsgericht Muenchen HRB 186928 > > > Links: > ------ > [1] http://www.intel.de > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From openinfradn at gmail.com Mon Feb 22 15:19:16 2021 From: openinfradn at gmail.com (open infra) Date: Mon, 22 Feb 2021 20:49:16 +0530 Subject: [Starlingx-discuss] STX Data Networks In-Reply-To: References: Message-ID: Hi Yatindra, On Mon, Feb 22, 2021 at 4:23 PM Shashi, Yatindra wrote: > Hi Danishka, > > > Have you looked in the below link , I hope it clarifies to you. > > Try with flat/VLAN data network and using it with Openstack for > understanding. > > https://wiki.openstack.org/wiki/StarlingX/Networking > No, I didn't go through this. AIO Simplex setup, I have where STX Data Network Topology showing, controler-0 has no connection to physnet0 and physnet1 I was referring the installation guide. But the https://docs.starlingx.io/deploy_install_guides/r4_release/virtual/aio_simplex_install_kubernetes.html Is that normal or I am I missing something? > > Regards, > > Yatindra Shashi > > IoTG DE > > > > *From:* open infra > *Sent:* 20 February 2021 20:36 > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] STX Data Networks > > > > Hi, > > > > I have a AIO simplex environment configured as per the starlingx r4 > documentation. > > Under data network topology I noticed that controller-0 without any > connections to available two networks. > > Is there documentation of how to create data networks of > starlingx/openstack? > > > > To be honest, setting up the networking of starlingx and openstack is not > clear. > > > > > > > > Regards, > > Danishka > > Intel Deutschland GmbH > Registered Address: Am Campeon 10, 85579 Neubiberg, Germany > Tel: +49 89 99 8853-0, www.intel.de > Managing Directors: Christin Eisenschmid, Sharon Heck, Tiffany Doon > Silva > Chairperson of the Supervisory Board: Nicole Lau > Registered Office: Munich > Commercial Register: Amtsgericht Muenchen HRB 186928 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexandru.dimofte at intel.com Mon Feb 22 15:30:36 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Mon, 22 Feb 2021 15:30:36 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210220T023311Z Message-ID: Sanity Test from 2021-February-20 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210220T023311Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210220T023311Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8408 bytes Desc: image001.png URL: From openinfradn at gmail.com Mon Feb 22 17:42:52 2021 From: openinfradn at gmail.com (open infra) Date: Mon, 22 Feb 2021 23:12:52 +0530 Subject: [Starlingx-discuss] STX supported/recommended hardware Message-ID: Hi, Is there a recommended hardware list or a list of tested (by community) hardware? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Feb 22 19:24:16 2021 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 22 Feb 2021 19:24:16 +0000 Subject: [Starlingx-discuss] STX supported/recommended hardware In-Reply-To: References: Message-ID: Hi Danishka. The StarlingX project documents hardware requirements in the installation guides. For instance this doc [1] describes the min hardware requirements for the Standard with Dedicated Storage configuration. Most testing of StarlingX occurs outside of community resources. My company has a test lab consisting of a range of systems that we use for testing. 
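As a side note on checking a specific machine against those documented minimums: once a node is installed, the sysinv CLI can report what the platform actually detected — a minimal sketch (a bootstrapped system and the usual controller-0 hostname are assumptions here):

source /etc/platform/openrc
system host-show controller-0         # overall host record
system host-cpu-list controller-0     # cores and their assigned functions
system host-memory-list controller-0  # memory per NUMA node, including hugepages
system host-disk-list controller-0    # disks, sizes and types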
In general the software supports the amd64 architecture and I would recommend Xeon-D series processors or the equivalent as a minimum for controller and worker nodes. It’s really your budget and your workload requirements that determine how much capacity you need. You can see this link [2] for more info on planning a deployment. brucej [1] https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_hardware.html [2] https://docs.starlingx.io/planning/index.html From: open infra Sent: Monday, February 22, 2021 9:43 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] STX supported/recommended hardware Hi, Is there a recommended hardware list or a list of tested (by community) hardware? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steven.Webster at windriver.com Mon Feb 22 21:11:09 2021 From: Steven.Webster at windriver.com (Webster, Steven) Date: Mon, 22 Feb 2021 21:11:09 +0000 Subject: [Starlingx-discuss] FYI: Upcoming uprev of SR-IOV CNI image to stx.5.0-v2.6-7-gb18123d8 In-Reply-To: References: Message-ID: Hi Folks, Just a follow-up from my previous mail on 02/18/2021: The change to reference the new SR-IOV CNI image has been merged. https://opendev.org/starlingx/ansible-playbooks/commit/4f587423c0d81efc54e57a7f04325ca18a19b9c2 Cheers, Steve From: Webster, Steven Sent: Thursday, February 18, 2021 12:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] FYI: Upcoming uprev of SR-IOV CNI image to stx.5.0-v2.6-7-gb18123d8 Hi All, Just an FYI for those that are using a private image mirror or proxy. In the next day or two, the SR-IOV CNI image will be up-versioned to pick up a few bug fixes. As part of this activity, the following image should be added to your mirror: docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8 Ref: https://review.opendev.org/c/starlingx/integ/+/776261 https://review.opendev.org/c/starlingx/root/+/776273 I will send another update to this list once the ansible-playbook repo is referencing the new image, but you are welcome to pull it in the meantime in preparation. There are no API / command changes required from a user of the CNI. Previous functionality has been verified and should work as before. Cheers, Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue Feb 23 00:18:25 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 22 Feb 2021 19:18:25 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1570 - Failure!
Message-ID: <1253974609.166.1614039506838.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1570 Status: Failure Timestamp: 20210223T001409Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210223T000000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-compiler/20210223T000000Z DOCKER_BUILD_ID: jenkins-master-compiler-20210223T000000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-compiler/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210223T000000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/compiler/20210223T000000Z/logs MASTER_JOB_NAME: STX_build_layer_compiler_master_master LAYER: compiler MY_REPO_ROOT: /localdisk/designer/jenkins/master-compiler PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/compiler From build.starlingx at gmail.com Tue Feb 23 02:47:33 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 22 Feb 2021 21:47:33 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1571 - Still Failing! In-Reply-To: <1372798406.164.1614039500845.JavaMail.javamailuser@localhost> References: <1372798406.164.1614039500845.JavaMail.javamailuser@localhost> Message-ID: <237956718.169.1614048453834.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1571 Status: Still Failing Timestamp: 20210223T020020Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210223T001804Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210223T001804Z DOCKER_BUILD_ID: jenkins-master-distro-20210223T001804Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210223T001804Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210223T001804Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/distro From openinfradn at gmail.com Tue Feb 23 09:24:14 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 23 Feb 2021 14:54:14 +0530 Subject: [Starlingx-discuss] STX supported/recommended hardware In-Reply-To: References: Message-ID: Hi Bruce, On Tue, Feb 23, 2021 at 12:54 AM Jones, Bruce E wrote: > Hi Danishka. The StarlingX project documents hardware requirements in the > installation guides. For instance this doc [1] describes the min hardware > requirements for the Standard with Dedicated Storage configuration. > > > > Most testing of StarlingX occurs outside of community resources. My > company has a test lab consisting of a range of systems that we use for > testing. In general the software supports the amd64 architecture and I > would recommend Xeon-D series processors or the equivalent as a minimum for > controller and worker nodes. > > > Is there such minimum spec for NICs to achieve better performance (low latency/less packet loss)? It’s really your budget and your workload requirements that determine how > much capacity you need. You can see this link [2] for more info on > planning a deployment. 
> > > > brucej > > > > [1] > https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_hardware.html > > [2] https://docs.starlingx.io/planning/index.html > > > > *From:* open infra > *Sent:* Monday, February 22, 2021 9:43 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] STX supported/recommended hardware > > > > Hi, > > > > Is there a recommended hardware list or a list of tested (by community) > hardware? > > > > Regards, > > Danishka > Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Tue Feb 23 09:54:32 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 23 Feb 2021 09:54:32 +0000 Subject: [Starlingx-discuss] cengn mirror server is down Message-ID: Hi guys, Looks like our Cengn mirror server(http://mirror.starlingx.cengn.ca/) is again offline. BR, Alex [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From lists at optimcloud.com Tue Feb 23 12:35:08 2021 From: lists at optimcloud.com (lists at optimcloud.com) Date: Tue, 23 Feb 2021 19:35:08 +0700 Subject: [Starlingx-discuss] STX Data Networks In-Reply-To: References: Message-ID: On 2021-02-22 17:53, Shashi, Yatindra wrote: > Hi Danishka, > > Have you looked in the below link , I hope it clarifies to you. > > Try with flat/VLAN data network and using it with Openstack for > understanding. > > https://wiki.openstack.org/wiki/StarlingX/Networking > > Regards, > > Yatindra Shashi > > IoTG DE > > From: open infra > Sent: 20 February 2021 20:36 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] STX Data Networks > > Hi, > > I have a AIO simplex environment configured as per the starlingx r4 > documentation. > > Under data network topology I noticed that controller-0 without any > connections to available two networks. > > Is there documentation of how to create data networks of > starlingx/openstack? > > To be honest, setting up the networking of starlingx and openstack is > not clear. 
> > Regards, > > Danishka I'll agree totally: setting up stx-openstack networking is not clear at all, and pointing us at this document confuses me even further. So I'll ask: can someone provide a simple "do this" to get stx-openstack client networking up on an AIO bare metal install? Because right now I'm stuck with:

Network Name   Type   MTU    Segmentation Ranges
physnet0       vlan   1500   -
physnet1       vlan   1500   -
phy-flat       flat   1500   -
(Displaying 3 items)

> > Intel Deutschland GmbH > Registered Address: Am Campeon 10, 85579 Neubiberg, Germany > Tel: +49 89 99 8853-0, www.intel.de [1] > Managing Directors: Christin Eisenschmid, Sharon Heck, Tiffany Doon > Silva > Chairperson of the Supervisory Board: Nicole Lau > Registered Office: Munich > Commercial Register: Amtsgericht Muenchen HRB 186928 > > > Links: > ------ > [1] http://www.intel.de > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Feb 23 13:57:13 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 23 Feb 2021 08:57:13 -0500 (EST) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images - Build # 336 - Failure! Message-ID: <752430897.173.1614088634986.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images Build #: 336 Status: Failure Timestamp: 20210223T043013Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210223T034658Z/logs -------------------------------------------------------------------------------- Parameters WEB_HOST: mirror.starlingx.cengn.ca MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210223T034658Z OS: centos MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root BASE_VERSION: master-stable-20210223T034658Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210223T034658Z/logs REGISTRY_USERID: slittlewrs LATEST_PREFIX: master PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210223T034658Z/logs PUBLISH_TIMESTAMP: 20210223T034658Z FLOCK_VERSION: master-centos-stable-20210223T034658Z WEB_HOST_PORT: 80 PREFIX: master TIMESTAMP: 20210223T034658Z BUILD_STREAM: stable REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210223T034658Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Tue Feb 23 13:57:16 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 23 Feb 2021 08:57:16 -0500 (EST) Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 93 - Failure!
Message-ID: <1944105004.176.1614088637890.JavaMail.javamailuser@localhost> Project: STX_build_docker_images_layered Build #: 93 Status: Failure Timestamp: 20210223T041012Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210223T034658Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210223T034658Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210223T034658Z/logs MASTER_BUILD_NUMBER: 117 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210223T034658Z/logs MASTER_JOB_NAME: STX_build_layer_containers_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers PUBLISH_TIMESTAMP: 20210223T034658Z DOCKER_BUILD_ID: jenkins-master-containers-20210223T034658Z-builder TIMESTAMP: 20210223T034658Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210223T034658Z/inputs LAYER: containers PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210223T034658Z/outputs From build.starlingx at gmail.com Tue Feb 23 13:57:19 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 23 Feb 2021 08:57:19 -0500 (EST) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 117 - Failure! Message-ID: <1510560586.179.1614088640027.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 117 Status: Failure Timestamp: 20210223T034658Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210223T034658Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From build.starlingx at gmail.com Tue Feb 23 14:43:10 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 23 Feb 2021 09:43:10 -0500 (EST) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 118 - Still Failing! In-Reply-To: <468986987.177.1614088638500.JavaMail.javamailuser@localhost> References: <468986987.177.1614088638500.JavaMail.javamailuser@localhost> Message-ID: <723225359.183.1614091390704.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 118 Status: Still Failing Timestamp: 20210223T144046Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210223T144046Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From scott.little at windriver.com Tue Feb 23 15:30:12 2021 From: scott.little at windriver.com (Scott Little) Date: Tue, 23 Feb 2021 10:30:12 -0500 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 117 - Failure! 
In-Reply-To: <1510560586.179.1614088640027.JavaMail.javamailuser@localhost> References: <1510560586.179.1614088640027.JavaMail.javamailuser@localhost> Message-ID: Build failed due to a hardware failure in the cengn firewall, and the backup failed to take over correctly. They are back online, and a new build has been started. Scott On 2021-02-23 8:57 a.m., build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_layer_containers_master_master > Build #: 117 > Status: Failure > Timestamp: 20210223T034658Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210223T034658Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Tue Feb 23 15:42:44 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 23 Feb 2021 21:12:44 +0530 Subject: [Starlingx-discuss] cengn mirror server is down In-Reply-To: References: Message-ID: Can we use sf.net as an alternative? On Tue, Feb 23, 2021 at 3:28 PM Dimofte, Alexandru < alexandru.dimofte at intel.com> wrote: > Hi guys, > > > > Looks like our Cengn mirror server(http://mirror.starlingx.cengn.ca/) is > again offline. > > > > BR, > > Alex > > > > [image: Logo Description automatically generated] > > Dimofte Alexandru > > Software Engineer > > STARLINGX TEAM > > Skype no: +40 336403734 > > Personal Mobile: +40 743167456 > > alexandru.dimofte at intel.com > > Intel Romania > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: not available URL: From Venkata.Veldanda at radisys.com Tue Feb 23 15:53:25 2021 From: Venkata.Veldanda at radisys.com (Venkata Ramana Veldanda) Date: Tue, 23 Feb 2021 15:53:25 +0000 Subject: [Starlingx-discuss] starlingx: unable to delete images from local registry In-Reply-To: References: Message-ID: Hi, We are using STX 4.0. We have been trying to delete images from the registry and they don't appear to be deleted. The "registry-image-delete" command returns success, but we couldn't figure out what is going wrong. Are there corresponding logs where the delete progress can be checked? Also, is there a workaround to delete them?
controller-0:/home/sysadmin# source /etc/platform/openrc [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# system registry-image-list --nowrap +----------------------------------------------------------------+ | Image Name | +----------------------------------------------------------------+ | docker.io/nfvpe/multus | | docker.io/starlingx/k8s-cni-sriov | | docker.io/starlingx/k8s-plugins-sriov-network-device | | docker.io/starlingx/n3000-opae | | gcr.io/kubernetes-helm/tiller | | k8s.gcr.io/coredns | | k8s.gcr.io/defaultbackend | | k8s.gcr.io/etcd | | k8s.gcr.io/kube-apiserver | | k8s.gcr.io/kube-controller-manager | | k8s.gcr.io/kube-proxy | | k8s.gcr.io/kube-scheduler | | k8s.gcr.io/pause | | quay.io/airshipit/armada | | quay.io/calico/cni | | quay.io/calico/kube-controllers | | quay.io/calico/node | | quay.io/calico/pod2daemon-flexvol | | quay.io/jetstack/cert-manager-acmesolver | | quay.io/jetstack/cert-manager-cainjector | | quay.io/jetstack/cert-manager-controller | | quay.io/jetstack/cert-manager-webhook | | quay.io/k8scsi/snapshot-controller | | quay.io/kubernetes-ingress-controller/nginx-ingress-controller | | quay.io/stackanetes/kubernetes-entrypoint | | vdu/du-l1 | | vdu/du-testmac | | vdu/dul1 | +----------------------------------------------------------------+ [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# system registry-image-delete vdu/du-testmac:v1 Image vdu/du-testmac:v1 deleted, please run garbage collect to free disk space. [root at controller-0 sysadmin(keystone_admin)]# system registry-image-delete vdu/du-l1:v1 Image vdu/du-l1:v1 deleted, please run garbage collect to free disk space. 
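As a cross-check on those deletes, the registry's own HTTP API can be queried directly, independent of the sysinv CLI — a minimal sketch, assuming the local registry answers at registry.local:9001 and that the admin credentials work for basic auth (both are assumptions, not confirmed in this thread):

# repositories the registry still tracks; note the catalog can keep a
# repository name listed even after all of its tags have been deleted
curl -k -u admin:${PASSWORD} https://registry.local:9001/v2/_catalog
# tags left in one repository; an empty/null tag list means the delete took effect
curl -k -u admin:${PASSWORD} https://registry.local:9001/v2/vdu/du-l1/tags/list

If that catalog behaviour holds here, it may explain why 'system registry-image-list' keeps showing the image names even after a successful 'system registry-garbage-collect'.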
[root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# [root at controller-0 sysadmin(keystone_admin)]# system registry-image-list --nowrap +----------------------------------------------------------------+ | Image Name | +----------------------------------------------------------------+ | docker.io/nfvpe/multus | | docker.io/starlingx/k8s-cni-sriov | | docker.io/starlingx/k8s-plugins-sriov-network-device | | docker.io/starlingx/n3000-opae | | gcr.io/kubernetes-helm/tiller | | k8s.gcr.io/coredns | | k8s.gcr.io/defaultbackend | | k8s.gcr.io/etcd | | k8s.gcr.io/kube-apiserver | | k8s.gcr.io/kube-controller-manager | | k8s.gcr.io/kube-proxy | | k8s.gcr.io/kube-scheduler | | k8s.gcr.io/pause | | quay.io/airshipit/armada | | quay.io/calico/cni | | quay.io/calico/kube-controllers | | quay.io/calico/node | | quay.io/calico/pod2daemon-flexvol | | quay.io/jetstack/cert-manager-acmesolver | | quay.io/jetstack/cert-manager-cainjector | | quay.io/jetstack/cert-manager-controller | | quay.io/jetstack/cert-manager-webhook | | quay.io/k8scsi/snapshot-controller | | quay.io/kubernetes-ingress-controller/nginx-ingress-controller | | quay.io/stackanetes/kubernetes-entrypoint | | vdu/du-l1 | | vdu/du-testmac | | vdu/dul1 | +----------------------------------------------------------------+ [root at controller-0 sysadmin(keystone_admin)]# system registry-image-delete vdu/dul1:v1 Image vdu/dul1:v1 deleted, please run garbage collect to free disk space. [root at controller-0 sysadmin(keystone_admin)]# system registry-garbage-collect Running docker registry garbage collect [root at controller-0 sysadmin(keystone_admin)]# system registry-image-list --nowrap +----------------------------------------------------------------+ | Image Name | +----------------------------------------------------------------+ | docker.io/nfvpe/multus | | docker.io/starlingx/k8s-cni-sriov | | docker.io/starlingx/k8s-plugins-sriov-network-device | | docker.io/starlingx/n3000-opae | | gcr.io/kubernetes-helm/tiller | | k8s.gcr.io/coredns | | k8s.gcr.io/defaultbackend | | k8s.gcr.io/etcd | | k8s.gcr.io/kube-apiserver | | k8s.gcr.io/kube-controller-manager | | k8s.gcr.io/kube-proxy | | k8s.gcr.io/kube-scheduler | | k8s.gcr.io/pause | | quay.io/airshipit/armada | | quay.io/calico/cni | | quay.io/calico/kube-controllers | | quay.io/calico/node | | quay.io/calico/pod2daemon-flexvol | | quay.io/jetstack/cert-manager-acmesolver | | quay.io/jetstack/cert-manager-cainjector | | quay.io/jetstack/cert-manager-controller | | quay.io/jetstack/cert-manager-webhook | | quay.io/k8scsi/snapshot-controller | | quay.io/kubernetes-ingress-controller/nginx-ingress-controller | | quay.io/stackanetes/kubernetes-entrypoint | | vdu/du-l1 | | vdu/du-testmac | | vdu/dul1 | +----------------------------------------------------------------+ [root at controller-0 sysadmin(keystone_admin)]# system registry-image-tags vdu/dul1 [root at controller-0 sysadmin(keystone_admin)]# system registry-image-tags vdu/du-testmac [root at controller-0 sysadmin(keystone_admin)]# system registry-image-tags vdu/du-l1 [root at controller-0 sysadmin(keystone_admin)]# system registry-image-list +----------------------------------------------------------------+ | Image Name | +----------------------------------------------------------------+ | docker.io/nfvpe/multus | | docker.io/starlingx/k8s-cni-sriov | 
| docker.io/starlingx/k8s-plugins-sriov-network-device | | docker.io/starlingx/n3000-opae | | gcr.io/kubernetes-helm/tiller | | k8s.gcr.io/coredns | | k8s.gcr.io/defaultbackend | | k8s.gcr.io/etcd | | k8s.gcr.io/kube-apiserver | | k8s.gcr.io/kube-controller-manager | | k8s.gcr.io/kube-proxy | | k8s.gcr.io/kube-scheduler | | k8s.gcr.io/pause | | quay.io/airshipit/armada | | quay.io/calico/cni | | quay.io/calico/kube-controllers | | quay.io/calico/node | | quay.io/calico/pod2daemon-flexvol | | quay.io/jetstack/cert-manager-acmesolver | | quay.io/jetstack/cert-manager-cainjector | | quay.io/jetstack/cert-manager-controller | | quay.io/jetstack/cert-manager-webhook | | quay.io/k8scsi/snapshot-controller | | quay.io/kubernetes-ingress-controller/nginx-ingress-controller | | quay.io/stackanetes/kubernetes-entrypoint | | vdu/du-l1 | | vdu/du-testmac | | vdu/dul1 | +----------------------------------------------------------------+ [root at controller-0 sysadmin(keystone_admin)]# -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Feb 23 16:38:40 2021 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 23 Feb 2021 16:38:40 +0000 Subject: [Starlingx-discuss] STX supported/recommended hardware In-Reply-To: References: Message-ID: > Is there such minimum spec for NICs to achieve better performance (low latency/less packet loss)? No, not really. We have this [1] but that is more about network setup than NIC requirements. For small clusters 1g NICs should be fine but 10g might be needed for larger clusters or network intensive workloads. StarlingX supports DPDK and Time Sensitive Networking, both of which significantly reduce packet latency. Brucej [1] https://docs.starlingx.io/planning/kubernetes/network-requirements.html From: open infra Sent: Tuesday, February 23, 2021 1:24 AM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX supported/recommended hardware Hi Bruce, On Tue, Feb 23, 2021 at 12:54 AM Jones, Bruce E > wrote: Hi Danishka. The StarlingX project documents hardware requirements in the installation guides. For instance this doc [1] describes the min hardware requirements for the Standard with Dedicated Storage configuration. Most testing of StarlingX occurs outside of community resources. My company has a test lab consisting of a range of systems that we use for testing. In general the software supports the amd64 architecture and I would recommend Xeon-D series processors or the equivalent as a minimum for controller and worker nodes. Is there such minimum spec for NICs to achieve better performance (low latency/less packet loss)? It’s really your budget and your workload requirements that determine how much capacity you need. You can see this link [2] for more info on planning a deployment. brucej [1] https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage_hardware.html [2] https://docs.starlingx.io/planning/index.html From: open infra > Sent: Monday, February 22, 2021 9:43 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] STX supported/recommended hardware Hi, Is there a recommended hardware list or a list of tested (by community) hardware? Regards, Danishka Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Saul.Wold at windriver.com Tue Feb 23 21:43:27 2021 From: Saul.Wold at windriver.com (Saul Wold) Date: Tue, 23 Feb 2021 13:43:27 -0800 Subject: [Starlingx-discuss] Merging Multi-OS & Distro Meeting Message-ID: <8f7e3f50-15a9-3b18-95ea-cebcfd536cd9@windriver.com> Folks, I would like to try to reduce our meeting load by one, and ideally meet at a more reasonable time. I would like to propose using the Wed @ 1430 UTC slot (0630 PT / 2230 China); when Canada/US "spring forward" on March 14th, the China time will move to 2130. So there will be 1 late meeting and then the shift. Please respond if you are part of either of these teams and this time does not work. Another option would be the current Multi-OS time slot on Tuesday at 1530 UTC, but that would mean a 2330 China time, shifting after the 14th to 2230. Ildiko, can you confirm that the Wed @ 1430 is available every other week opposite the Release Team meeting (so starting March 4th). Thanks -- Sau! From lists at optimcloud.com Wed Feb 24 11:18:48 2021 From: lists at optimcloud.com (lists at optimcloud.com) Date: Wed, 24 Feb 2021 18:18:48 +0700 Subject: [Starlingx-discuss] STX Data Networks In-Reply-To: References: Message-ID: <7093e4a2c044a39789dc986e3e1a8391@optimcloud.com> On 2021-02-22 22:19, open infra wrote: > Hi Yatindra, > okay all... HEELLLPPPPP...! https://pasteboard.co/JPPnzXz.png stx-openstack struggle...... From Bill.Zvonar at windriver.com Wed Feb 24 13:49:10 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 24 Feb 2021 13:49:10 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (Feb 24, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community calls coming up later today. Please feel free to add items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210224T1500 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Scott.Little at windriver.com Wed Feb 24 14:47:39 2021 From: Scott.Little at windriver.com (Little, Scott) Date: Wed, 24 Feb 2021 14:47:39 +0000 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 117 - Failure! In-Reply-To: <1510560586.179.1614088640027.JavaMail.javamailuser@localhost> References: <1510560586.179.1614088640027.JavaMail.javamailuser@localhost>, Message-ID: The rebuild was successful Scott ________________________________ From: Scott Little Sent: February 23, 2021 10:30 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 117 - Failure! Build failed due to a hardware failure in the cengn firewall, and the backup failed to take over correctly. They are back online, and a new build has been started.
Scott On 2021-02-23 8:57 a.m., build.starlingx at gmail.com wrote: [Please note: This e-mail is from an EXTERNAL e-mail address] Project: STX_build_layer_containers_master_master Build #: 117 Status: Failure Timestamp: 20210223T034658Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210223T034658Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Feb 24 15:56:25 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 24 Feb 2021 15:56:25 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (Feb 24, 2021) In-Reply-To: References: Message-ID: >From today's meeting: * Standing Topics * Build/Sanity * was still red until Monday, no sanity report since then - CENGN issues? * per Nicolae, on their 2nd setup, all sanities pass now, including Standard on Bare Metal * per Scott, there was a CENGN issue impacting container build on Monday, but it got sorted out (ripples from power outage a few weeks ago) * Gerrit Reviews in Need of Attention * nothing this week * Topics for this Week * Turning some of the hands-on workshop materials into a Getting Started guide? (ildikov) * AR: Greg will look for these materials - we'll have to assess how out of date they are * How to get more people from the community on IRC? (ildikov) * everyone's encouraged to hang out on IRC, and to let Ildiko know if they've got any barriers to getting on IRC * Merging OS-Distro & Multi-OS meeting (Saul) * these will be merged, at 9:30 EST Wednesday (bi-weekly alternating with the Release Team meetings) * Update on CentOS alternatives - new multi-os project (Saul) * Saul's working on a specification that will be shared with the community for review * work will begin in starlingx-staging * ARs from Previous Meetings * no updates this week * Open Requests for Help * STX Data Networks issue * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010886.html * https://pasteboard.co/JPPnzXz.png * per Greg * see https://docs.starlingx.io/datanet/data-networks-overview.html * and seems to be related to the request above re: getting started guide & need/desire for information/guides after successful installation of the system * Ildiko concurred with this * Greg will respond to the email on the mailing list * unable to delete images from local registry * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010906.html * per Greg, there was a known issue with this at one point, Bill will follow up * k8s pods cant use application-isolated cpus * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010882.html * Frank to follow up * incomplete openstack horizon ui * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010871.html * Bill will ask Nicolae to follow up - maybe they're just on the wrong Horizon? 
* Adding IoT devices to StarlingX Subcloud as K8s worker nodes * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010857.html * Austin will review & respond * failed to run Kubelet: invalid configuration: cgroup-root ["k8s-infra"] doesn't exist * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-February/010849.html * Austin will review & respond * Build Matters (if required) * nothing this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, February 24, 2021 8:49 AM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (Feb 24, 2021) Hi all, reminder of the weekly TSC/Community calls coming up later today. Please feel free to add items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210224T1500 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Steven.Webster at windriver.com Wed Feb 24 15:59:40 2021 From: Steven.Webster at windriver.com (Webster, Steven) Date: Wed, 24 Feb 2021 15:59:40 +0000 Subject: [Starlingx-discuss] STX Data Networks In-Reply-To: <7093e4a2c044a39789dc986e3e1a8391@optimcloud.com> References: <7093e4a2c044a39789dc986e3e1a8391@optimcloud.com> Message-ID: Would I be able to get the output of (I think you are on AIO-SX): system interface-datanetwork-list controller-0 That should have entries added when the following was configured system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0} system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1} From the guide: https://docs.starlingx.io/deploy_install_guides/r4_release/virtual/aio_simplex_install_kubernetes.html You can also read more about it here: https://docs.starlingx.io/configuration/host_interface_network_config.html Cheers, Steve > -----Original Message----- > From: lists at optimcloud.com > Sent: Wednesday, February 24, 2021 6:19 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] STX Data Networks > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > On 2021-02-22 22:19, open infra wrote: > > Hi Yatindra, > > > > okay all... HEELLLPPPPP...! > > https://pasteboard.co/JPPnzXz.png > > stx-openstack struggle...... > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From OwenYuen at cmail.carleton.ca Wed Feb 24 16:04:54 2021 From: OwenYuen at cmail.carleton.ca (Owen Yuen) Date: Wed, 24 Feb 2021 16:04:54 +0000 Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI In-Reply-To: References: , , Message-ID: Thank you for the prompt response Qi, My other question is about failing over workloads across multiple subclouds. Is this a feature supported? Thanks Owen From: Qi, Mingyuan Sent: Sunday, February 21, 2021 8:55 PM To: Owen Yuen; starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut; Aidan Seguin-McPeake Subject: RE: How to deploy workloads from StarlingX GUI/CLI [External Email] Owen, Several enhancements: 1. Multiple helm charts deployment at a time with dependency check. 2. Plugin supported to automatically generate system related overrides from sysinv api. 3. 
Re-generate/re-check overrides when system status is changing. 4. Override operation cmd ‘system helm-override-xxx’. If your application does not need to get the system level information, nor has complicated helm chart dependencies, you can use helm chart as usual. Mingyuan From: Owen Yuen Sent: Thursday, February 18, 2021 5:26 To: Qi, Mingyuan ; starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut ; Aidan Seguin-McPeake Subject: RE: How to deploy workloads from StarlingX GUI/CLI Thanks Mingyuan, A follow up question: what is the benefit of deploying an application with ‘system application’ over directly from helm? Thanks again Owen From: Qi, Mingyuan Sent: Tuesday, February 9, 2021 2:20 AM To: Owen Yuen; starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut; Aidan Seguin-McPeake Subject: RE: How to deploy workloads from StarlingX GUI/CLI [External Email] Hi Owen, You could create an armada application with app-gen-tool[0] and apply it by ‘system application’ CLI. There is no panel in StarlingX dashboard GUI to manage applications so far. [0] https://opendev.org/starlingx/tools/src/branch/master/app-gen-tool Mingyuan From: Owen Yuen > Sent: Tuesday, February 9, 2021 10:25 To: starlingx-discuss at lists.starlingx.io Cc: Thomas Yungblut >; Aidan Seguin-McPeake > Subject: [Starlingx-discuss] How to deploy workloads from StarlingX GUI/CLI Is it possible to deploy a workload via STX instead of via kubectl or kubernetes GUI directly? We are running an distributed cloud AIO duplex setup so our worker hosts are on con0 and 1 if that makes a difference. Also, how does StarlingX manage workloads from the GUI? Any help would be greatly appreciated. Thanks Owen This email contains links to content or websites. Always be cautious when clicking on external links or attachments. If in doubt, please forward suspicious emails to phishing at carleton.ca. -----End of Disclaimer----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at optimcloud.com Wed Feb 24 16:13:55 2021 From: lists at optimcloud.com (lists at optimcloud.com) Date: Wed, 24 Feb 2021 23:13:55 +0700 Subject: [Starlingx-discuss] STX Data Networks In-Reply-To: References: <7093e4a2c044a39789dc986e3e1a8391@optimcloud.com> Message-ID: <4f268c0cbc582c5decfd31a6d4948133@optimcloud.com> On 2021-02-24 22:59, Webster, Steven wrote: > Would I be able to get the output of (I think you are on AIO-SX): > > system interface-datanetwork-list controller-0 sure... 
probably not what you'd expect though; someone recommended some docs and that was a fail, hence the reason I'm stuck

system interface-datanetwork-list controller-0
+--------------+--------------------------------------+--------+------------------+
| hostname     | uuid                                 | ifname | datanetwork_name |
+--------------+--------------------------------------+--------+------------------+
| controller-0 | 9b3720ec-eeb8-4c0b-b19b-44f3e838e91b | data1  | physnet1         |
| controller-0 | c7e23c08-41ab-42bc-8f8e-d0980c57a9d3 | data0  | phy-flat         |
+--------------+--------------------------------------+--------+------------------+

> > That should have entries added when the following was configured > > system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0} > system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1} > > From the guide: > > https://docs.starlingx.io/deploy_install_guides/r4_release/virtual/aio_simplex_install_kubernetes.html > > You can also read more about it here: > > https://docs.starlingx.io/configuration/host_interface_network_config.html > > Cheers, > > Steve > >> -----Original Message----- >> From: lists at optimcloud.com >> Sent: Wednesday, February 24, 2021 6:19 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] STX Data Networks >> >> [Please note: This e-mail is from an EXTERNAL e-mail address] >> >> On 2021-02-22 22:19, open infra wrote: >> > Hi Yatindra, >> > >> >> okay all... HEELLLPPPPP...! >> >> https://pasteboard.co/JPPnzXz.png >> >> stx-openstack struggle...... >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From nicolae.jascanu at intel.com Wed Feb 24 16:51:16 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Wed, 24 Feb 2021 16:51:16 +0000 Subject: [Starlingx-discuss] incomplete openstack horizon ui In-Reply-To: <5f2d8946b0d908c1a38aca5cf11a461c@optimcloud.com> References: <5f2d8946b0d908c1a38aca5cf11a461c@optimcloud.com> Message-ID: Hi, Looks like you are using the StarlingX dashboard at port :8080 You can use the Openstack dashboard for launching instances at port :31000 Regards, Nicolae Jascanu, Ph.D. Software Engineer IOTG Galati, Romania -----Original Message----- From: lists at optimcloud.com Sent: Thursday, February 18, 2021 17:02 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] incomplete openstack horizon ui deployed stx aio on bare-metal and deployed openstack, says its running i can login to os horizon, yet under platform i dont see the normal links for launching instances under platform all its lists is Platform Software Management Host Inventory Data Networks Data Network Topology Storage Overview System Configuration soooo what did i miss..... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chris.friesen at windriver.com Wed Feb 24 17:47:58 2021 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 24 Feb 2021 11:47:58 -0600 Subject: [Starlingx-discuss] k8s pods cant use application-isolated cpus In-Reply-To: References: Message-ID: <1e65850f-fb64-e5eb-dcd5-b8399e4e6b88@windriver.com> Upstream Kubernetes doesn't really know how to handle "isolated" CPUs separately from "normal" CPUs.
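As a quick illustration of that point, the only CPUs Kubernetes will ever schedule are the ones counted in the node's Allocatable resource — a minimal sketch, assuming kubectl access and a node named controller-0:

kubectl get node controller-0 -o jsonpath='{.status.allocatable.cpu}'   # schedulable CPU count
kubectl describe node controller-0 | grep -A 8 'Allocatable'           # full allocatable resource list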
Because of this sysinv will reserve the "application-isolated" CPUs from use by Kubernetes when using the "static" CPU Manager policy. If you set the kube-cpu-mgr-policy host label to "none" (via Horizon or the "sysinv" CLI) Kubernetes will use the default policy and should include the "application-isolated" CPUs as part of the "Allocatable" cpu count. You'll have to handle CPU affinity of containerized processes explicitly, though, to ensure that they don't interfere with each other. Commercial products based on StarlingX have support for isolated CPUs with the "static" CPU Manager policy by modifying upstream Kubernetes. Hope that helps, Chris On 2/18/2021 12:00 PM, Dharwadkar, Sriram wrote:
> Hi, We are using Distributed-StarlingX-4.0. Our use case requires pods to use isolated CPUs, so we executed the "system host-cpu-modify -f application-isolated -p0 20 controller-0" command to isolate the CPUs.
> system host-cpu-list controller-1
> +--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+----------------------+
> | uuid                                 | log_core | processor | phy_core | thread | processor_model                           | assigned_function    |
> +--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+----------------------+
> | e64e0004-3096-40c7-804c-d9c79d2ceb17 | 0        | 0         | 0        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Platform             |
> | 28697ba9-c5e5-4d9b-9d97-346ec61c1fa7 | 1        | 0         | 1        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Platform             |
> | 1b111e2e-84e4-4414-870f-73c759ecbac0 | 2        | 0         | 2        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 66933896-5c9f-4e0a-9527-72025172e1f6 | 3        | 0         | 3        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 027258c8-392e-4497-9e46-6acb007e759f | 4        | 0         | 4        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 5d88a23a-68de-4062-9313-173fd91a4bc5 | 5        | 0         | 5        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | dd804bbb-191c-4499-bda1-272f9bb923af | 6        | 0         | 6        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 7ae2a90b-59ca-4af0-b20e-20635af0aceb | 7        | 0         | 8        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | ada68f62-4d06-43eb-8ed9-cd6ec6d33487 | 8        | 0         | 9        | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 8c2612fd-8a7f-4732-82ff-fb4717ab9275 | 9        | 0         | 10       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 01412e4d-c9ff-41e8-8fd3-c2a0d27ac7a3 | 10       | 0         | 11       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | c8e8b15b-538b-422e-9d72-2ba96982054a | 11       | 0         | 12       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | b2efd166-3125-46b9-95ac-4fb00a0de3c7 | 12       | 0         | 13       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | c33f3dd7-0b06-40c3-93b3-82f065c6db05 | 13       | 0         | 16       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | e33cda3e-7162-45d4-af79-4dff7f0bb07f | 14       | 0         | 17       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 551b7272-371e-4c05-b5c8-7708f95d3a97 | 15       | 0         | 18       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | d0522e8f-ebd4-49fd-8a0e-26de857f2caf | 16       | 0         | 19       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 84dd71f2-ad7a-46bb-8ae2-e62b2a817cc5 | 17       | 0         | 20       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | f4761605-3208-4758-b31c-d679cb0b631e | 18       | 0         | 21       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 66423b6f-134b-4166-bdff-9643d4ec8d4d | 19       | 0         | 25       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 45d38571-ca97-4b89-b442-fe5f5282bc5f | 20       | 0         | 26       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | fc87bbd5-2954-4730-9f67-b8fe8820faa7 | 21       | 0         | 27       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application-isolated |
> | 6b3a8a78-d191-47fa-af82-15166a7924fc | 22       | 0         | 28       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application          |
> | 57739379-ab8f-4d3d-847a-f5aa9663bcfb | 23       | 0         | 29       | 0      | Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz | Application          |
> +--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+----------------------+
> But the k8s is not able to allocate isolated cpus to pods. Allocatable cpus are only 2.
> controller-0:/home/sysadmin# kubectl get node controller-0 -o json | jq '.status.allocatable'
> {
>   "cpu": "2",
>   "ephemeral-storage": "9391196145",
>   "hugepages-1Gi": "20Gi",
>   "hugepages-2Mi": "0",
>   "intel.com/pci_sriov_net_physnet0": "32",
>   "memory": "70207428Ki",
>   "pods": "110"
> }
> I saw this bug report, which seems to be talking about the same issue. https://bugs.launchpad.net/starlingx/+bug/1894173 Kindly let me know when the fix is planned. If there is any workaround to solve this issue, do let me know. Regards, Sriram
> _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Sriram.Dharwadkar at commscope.com Wed Feb 24 17:59:56 2021 From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram) Date: Wed, 24 Feb 2021 17:59:56 +0000 Subject: [Starlingx-discuss] Upgrade MLNX-OFED to 5.2-2.2.0.0 in StarlingX-4.0 to latest In-Reply-To: References: Message-ID: Hi, Please let me know if there is a procedure to upgrade the OFED package. We are blocked on this; please let me know if any solution is possible. Regards, Sriram
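For narrowing this down while the OFED upgrade is blocked, the driver, firmware and per-VF spoof-check state can be read (and the spoof check toggled) with stock tools, without the OFED installer — a minimal sketch, assuming the Mellanox PF shows up as ens1f0 (the interface name is an assumption):

ethtool -i ens1f0                      # driver (e.g. mlx5_core), driver version and firmware-version
ip link show ens1f0                    # lists VFs, including their "spoof checking on/off" state
ip link set ens1f0 vf 0 spoofchk off   # manual per-VF workaround; persistence across pod restarts is not verified here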
From: Dharwadkar, Sriram Sent: Monday, February 15, 2021 9:34 AM To: starlingx-discuss at lists.starlingx.io Subject: RE: Upgrade MLNX-OFED to 5.2-2.2.0.0 in StarlingX-4.0 to latest Does StarlingX support upgrading of MLNX-OFED packages? Please let me know if there is a procedure to upgrade the OFED package. Regards, Sriram From: Dharwadkar, Sriram Sent: Tuesday, February 9, 2021 11:48 PM To: starlingx-discuss at lists.starlingx.io Subject: Upgrade MLNX-OFED to 5.2-2.2.0.0 in StarlingX-4.0 to latest Hi, I have installed distributed StarlingX 4.0. We are facing one issue wrt the MLNX-OFED version. On the ConnectX-4 EN NIC that we are using in our platform, we see an issue related to the spoof check parameter. In the Kubernetes environment, after a pod restart, if the pod attaches to the same VF it was using previously, spoof check becomes ON automatically and traffic stops going out of that VF. To solve that issue, our hardware vendor has suggested an upgrade of MLNX-OFED (5.2-2.2.0.0) and a firmware upgrade (MFT 4.16.1). In the StarlingX environment, I tried doing # ./install.sh --oem -E- There are missing packages that are required for installation of MFT. -I- You can install missing packages using: yum install gcc rpm-build kernel-devel-4.18.0-147.3.1.rt24.96.el8_1.tis.8.x86_64 I could install gcc and kernel-devel-4.18.0-147.3.1.rt24.96.el8_1.tis.8.x86_64, but the rpm-build installation is not going through because of some dependency. How do we go about upgrading these packages? Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Wed Feb 24 18:19:47 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 24 Feb 2021 18:19:47 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210223T024713Z Message-ID: Sanity Test from 2021-February-23 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210223T024713Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210223T024713Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML
attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8408 bytes Desc: image001.png URL: From lists at optimcloud.com Wed Feb 24 18:26:41 2021 From: lists at optimcloud.com (lists at optimcloud.com) Date: Thu, 25 Feb 2021 01:26:41 +0700 Subject: Re: [Starlingx-discuss] incomplete openstack horizon ui In-Reply-To: References: <5f2d8946b0d908c1a38aca5cf11a461c@optimcloud.com> Message-ID: <92a64a61eff477764d820fc9e0e5bc47@optimcloud.com> On 2021-02-24 23:51, Jascanu, Nicolae wrote: > Hi, > Looks like you are using the StarlingX dashboard at port :8080 > You can use the Openstack dashboard for launching instances at port > :31000 > > yupp, only it says unable to connect in the browser... guess it's time to run system application-apply stx-openstack again, though it says it's all running and deployed > Regards, > Nicolae Jascanu, Ph.D. > Software Engineer > IOTG > Galati, Romania > > > -----Original Message----- > From: lists at optimcloud.com > Sent: Thursday, February 18, 2021 17:02 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] incomplete openstack horizon ui > > deployed stx aio on bare-metal and deployed openstack, says its > running i can login to os horizon, yet under platform i dont see the > normal links for launching instances > > under platform all its lists is > > Platform > Software Management > Host Inventory > Data Networks > Data Network Topology > Storage Overview > System Configuration > > soooo what did i miss..... > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maryx.camp at intel.com Wed Feb 24 21:34:58 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 24 Feb 2021 21:34:58 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 24-Feb-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 24-Feb-21 All -- reviews merged since last meeting: 4 All -- bug status -- 17 total - team agrees to defer all low priority LP until the upstreaming effort is completed. 13 LP are WIP against API documentation, which is generated from source code (low priority). Those reviews are here: https://review.opendev.org/#/q/project:starlingx/config New on 10Feb21: Documentation needs to be updated for LAG and VLAN type interfaces [https://bugs.launchpad.net/starlingx/+bug/1915285] Status/questions/opens Juanita asked about PTP support. We suggested pinging Greg about this - we think it may already be supported on StarlingX. Greg's email about reorganizing guides on main docs landing page - is it time to also remove older STX content that is duplicated in upstream guides? Ron, Juanita, and Greg have a regular meeting on Friday. They will discuss and come up with a plan. May set up a separate meeting just for this discussion. Possibly reorganize content without deleting duplicated info as a first step? Mary started a review for R5 release notes as discussed at last week's meeting. https://review.opendev.org/c/starlingx/docs/+/777433 I will add it to the agenda for the next release meeting in 2 weeks.
From maryx.camp at intel.com Wed Feb 24 21:34:58 2021
From: maryx.camp at intel.com (Camp, MaryX)
Date: Wed, 24 Feb 2021 21:34:58 +0000
Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 24-Feb-21
Message-ID:

Hello all,
Here are this week's docs team meeting minutes (short form). Details in [2].
Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.

[1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings
[2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation

thanks,
Mary Camp
==========
24-Feb-21

All -- reviews merged since last meeting: 4
All -- bug status -- 17 total - team agrees to defer all low-priority LPs until the upstreaming effort is completed.
   13 LPs are WIP against API documentation, which is generated from source code (low priority). Those reviews are here: https://review.opendev.org/#/q/project:starlingx/config
   New on 10Feb21: Documentation needs to be updated for LAG and VLAN type interfaces [https://bugs.launchpad.net/starlingx/+bug/1915285]
Status/questions/opens
   Juanita asked about PTP support. We suggested pinging Greg about this - we think it may already be supported on StarlingX.
   Greg's email about reorganizing guides on the main docs landing page - is it time to also remove older STX content that is duplicated in upstream guides?
      Ron, Juanita, and Greg have a regular meeting on Friday. They will discuss and come up with a plan. May set up a separate meeting just for this discussion.
      Possibly reorganize content without deleting duplicated info as a first step?
   Mary started a review for R5 release notes as discussed at last week's meeting: https://review.opendev.org/c/starlingx/docs/+/777433
      I will add it to the agenda for the next release meeting in 2 weeks.
   Update on the version dropdown implementation from Kevin: he plans to submit a review next week. The first review will implement the theme changes; the version picker changes will come in a later review.

From alexandru.dimofte at intel.com Thu Feb 25 11:12:43 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Thu, 25 Feb 2021 11:12:43 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210224T023311Z
Message-ID:

Sanity Test from 2021-February-24
(http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210224T023311Z/outputs/iso/)

Status: GREEN

Helm-Chart used:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210224T023311Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 8408 bytes
Desc: image001.png
URL:

From ildiko.vancsa at gmail.com Thu Feb 25 14:39:52 2021
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Thu, 25 Feb 2021 15:39:52 +0100
Subject: [Starlingx-discuss] Tuesday Packet meeting - Removed from meeting calendar
In-Reply-To: <976BC1C5-CA1A-4AF9-9F76-936943AB7174@gmail.com>
References: <702790BB-8188-4A07-B84D-FCBDBB84262B@gmail.com> <976BC1C5-CA1A-4AF9-9F76-936943AB7174@gmail.com>
Message-ID: <51E931F8-8F1C-4F81-8D17-F4493E4AE868@gmail.com>

Hi,

I just wanted to give a quick heads-up that, as no one indicated that the Packet meeting is still running, I removed it from the meeting wiki. I also have another call scheduled to run at that time from my account.

Thanks,
Ildikó

> On Feb 16, 2021, at 16:32, Ildiko Vancsa wrote:
>
> Cool, I'm happy that you confirmed that.
>
> Is there any demo activity running on that hardware that could be documented as a blog post, maybe even with a demo video (screen cap or smth)?
> It would be great to get some new content for the blog, so I was wondering if this might be a good source.
>
> Thanks,
> Ildikó
>
>> On Feb 16, 2021, at 16:19, Waines, Greg wrote:
>>
>> We do still use the Packet testbed for some starlingx demos.
>> Greg.
>>
>> From: Ildiko Vancsa
>> Date: Tuesday, February 16, 2021 at 8:57 AM
>> To: Greg Waines
>> Cc: StarlingX ML
>> Subject: Re: [Starlingx-discuss] Tuesday Packet meeting - Is it still running?
>>
>> [Please note: This e-mail is from an EXTERNAL e-mail address]
>>
>> Hi Greg,
>>
>> Cool, thanks for confirming.
>>
>> BTW, are you or anyone from the project still using the Packet testbed to put together demos, etc.?
>>
>> Thanks,
>> Ildikó
>>
>>> On Feb 16, 2021, at 12:37, Waines, Greg wrote:
>>>
>>> I am not aware of this meeting still happening.
>>> Greg.
>>>
>>> From: Ildiko Vancsa
>>> Date: Monday, February 15, 2021 at 9:34 AM
>>> To: StarlingX ML
>>> Subject: [Starlingx-discuss] Tuesday Packet meeting - Is it still running?
>>>
>>> [Please note: This e-mail is from an EXTERNAL e-mail address]
>>>
>>> Hi,
>>>
>>> I recognized that the meeting wiki still contains the Packet SIG calls for Tuesdays at 10am PST. Are those calls still happening, or is there any other StarlingX team call at that time?
>>>
>>> I'm asking both to keep the wiki up to date and to see if the Zoom account is available at that time.
>>>
>>> Thanks,
>>> Ildikó
>>>
>>> _______________________________________________
>>> Starlingx-discuss mailing list
>>> Starlingx-discuss at lists.starlingx.io
>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From build.starlingx at gmail.com Thu Feb 25 21:28:32 2021
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 25 Feb 2021 16:28:32 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1579 - Failure!
Message-ID: <574852986.2.1614288514353.JavaMail.javamailuser@localhost>

Project: STX_build_lst_audit
Build #: 1579
Status: Failure
Timestamp: 20210225T212414Z
Branch:

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210225T211019Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-compiler/20210225T211019Z
DOCKER_BUILD_ID: jenkins-master-compiler-20210225T211019Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master-compiler/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210225T211019Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/compiler/20210225T211019Z/logs
MASTER_JOB_NAME: STX_build_layer_compiler_master_master
LAYER: compiler
MY_REPO_ROOT: /localdisk/designer/jenkins/master-compiler
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/compiler

From build.starlingx at gmail.com Thu Feb 25 21:57:01 2021
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 25 Feb 2021 16:57:01 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1580 - Still Failing!
In-Reply-To: <589302960.0.1614288510039.JavaMail.javamailuser@localhost>
References: <589302960.0.1614288510039.JavaMail.javamailuser@localhost>
Message-ID: <1528854874.5.1614290222177.JavaMail.javamailuser@localhost>

Project: STX_build_lst_audit
Build #: 1580
Status: Still Failing
Timestamp: 20210225T214154Z
Branch:

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210225T212811Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210225T212811Z
DOCKER_BUILD_ID: jenkins-master-distro-20210225T212811Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210225T212811Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210225T212811Z/logs
MASTER_JOB_NAME: STX_build_layer_distro_master_master
LAYER: distro
MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/distro

From build.starlingx at gmail.com Thu Feb 25 22:24:13 2021
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 25 Feb 2021 17:24:13 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 1280 - Failure!
Message-ID: <499892640.8.1614291854075.JavaMail.javamailuser@localhost>

Project: STX_build_pre_installer_layered
Build #: 1280
Status: Failure
Timestamp: 20210225T221255Z
Branch:

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210225T215643Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20210225T215643Z
DOCKER_BUILD_ID: jenkins-master-flock-20210225T215643Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210225T215643Z/logs
BUILD_IMG: true
FULL_BUILD: false
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20210225T215643Z/logs
MASTER_JOB_NAME: STX_build_layer_flock_master_master
LAYER: flock
MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock
BUILD_ISO: true

From build.starlingx at gmail.com Thu Feb 25 22:24:15 2021
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 25 Feb 2021 17:24:15 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 404 - Failure!
Message-ID: <1056389526.11.1614291856143.JavaMail.javamailuser@localhost>

Project: STX_build_layer_flock_master_master
Build #: 404
Status: Failure
Timestamp: 20210225T215643Z
Branch: master

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210225T215643Z/logs
--------------------------------------------------------------------------------
Parameters

FULL_BUILD: false
FORCE_BUILD: true

From alexandru.dimofte at intel.com Fri Feb 26 07:49:36 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Fri, 26 Feb 2021 07:49:36 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210225T023335Z
Message-ID:

Sanity Test from 2021-February-25
(http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210225T023335Z/outputs/iso/)

Status: GREEN

Helm-Chart used:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210225T023335Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 8408 bytes
Desc: image001.png
URL:

From anyrude10 at gmail.com Fri Feb 26 11:53:47 2021
From: anyrude10 at gmail.com (Anirudh Gupta)
Date: Fri, 26 Feb 2021 17:23:47 +0530
Subject: [Starlingx-discuss] [StarlingX 4.0] - Installing StarlingX using External Registry
Message-ID:

Hi Team,

We need to install StarlingX Release 4.0 on our lab machines, which have no internet connectivity. According to the storyboard link below,

https://storyboard.openstack.org/#!/story/2004711

there is a way in which we can clone all the required StarlingX repos on a staging server and use that server to install StarlingX on our machines. However, I am unable to find any supporting document for this.

Can someone please help with the following items:

- Steps to set up a local registry server
- Changes that need to be made in the /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml file for the same

Regards
Anirudh Gupta
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
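One hedged sketch for the question above: the usual practice is to leave default.yml untouched and to put registry overrides in $HOME/localhost.yml on the controller before running the bootstrap playbook. The registry address 192.168.1.50:5000 and the assumption that a single private registry mirrors every upstream namespace are illustrative only:

    # Append docker registry overrides for an air-gapped bootstrap
    cat >> $HOME/localhost.yml <<'EOF'
    docker_registries:
      k8s.gcr.io:
        url: 192.168.1.50:5000/k8s.gcr.io
      gcr.io:
        url: 192.168.1.50:5000/gcr.io
      quay.io:
        url: 192.168.1.50:5000/quay.io
      docker.io:
        url: 192.168.1.50:5000/docker.io
    EOF

    # Bootstrap as usual; Ansible reads the overrides from localhost.yml
    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml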
From scott.little at windriver.com Fri Feb 26 16:25:24 2021
From: scott.little at windriver.com (Scott Little)
Date: Fri, 26 Feb 2021 11:25:24 -0500
Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 404 - Failure!
In-Reply-To: <1056389526.11.1614291856143.JavaMail.javamailuser@localhost>
References: <1056389526.11.1614291856143.JavaMail.javamailuser@localhost>
Message-ID:

Failed building the installer due to a failure to obtain a loop device. Strange, since my investigation shows all loop devices free.

The rebuild passed.

Scott

On 2021-02-25 5:24 p.m., build.starlingx at gmail.com wrote:
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Project: STX_build_layer_flock_master_master
> Build #: 404
> Status: Failure
> Timestamp: 20210225T215643Z
> Branch: master
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210225T215643Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> FULL_BUILD: false
> FORCE_BUILD: true
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
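For context on the failure mode Scott describes, a small sketch of the loop-device checks a build wrapper could run up front; whether this matches the CENGN job's actual environment is an assumption:

    # List loop devices currently attached to backing files
    losetup -a

    # Print the first unused loop device; exits non-zero if none is free
    losetup -f

    # Detach a stale device left behind by an earlier build (device name is an example)
    losetup -d /dev/loop0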
From alexandru.dimofte at intel.com Sat Feb 27 07:02:05 2021
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Sat, 27 Feb 2021 07:02:05 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210226T024233Z
Message-ID:

Sanity Test from 2021-February-26
(http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210226T024233Z/outputs/iso/)

Status: GREEN

Helm-Chart used:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210226T024233Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 8408 bytes
Desc: image001.png
URL: