From sergei.akhmatov at xunison.com Wed Mar 1 13:18:14 2023
From: sergei.akhmatov at xunison.com (Sergei Akhmatov)
Date: Wed, 1 Mar 2023 13:18:14 +0000
Subject: [Starlingx-discuss] Stx8 interface bonding issue
Message-ID:

Hello. I have a StarlingX 8 release (Debian-based) installation with duplex controllers + 2 workers, and I've run into an issue with the following configuration:

All nodes are on baremetal with 2x10G interfaces. Interfaces are bonded with LACP, and VLAN interfaces are created for the different networks:

+--------------------------------------+--------------+----------+----------+---------+--------------+--------------------------+-----------------------------------+--------------------------------------------------+
| uuid | name | class | type | vlan id | ports | uses i/f | used by i/f | attributes |
+--------------------------------------+--------------+----------+----------+---------+--------------+--------------------------+-----------------------------------+--------------------------------------------------+
| 1c259ab0-c08f-4860-976c-6cc00e4cfaed | mgmt0 | platform | vlan | 201 | [] | ['ae0'] | [] | MTU=1500 |
| 4b86faef-509f-4efb-a53a-3b2d1d9b98e8 | oam0 | platform | vlan | 100 | [] | ['ae0'] | [] | MTU=1500 |
| 5edb4655-a048-4839-b01d-f2ec64e5ad01 | enp2s0f1 | None | ethernet | None | ['enp2s0f1'] | [] | ['ae0'] | MTU=1500 |
| 792d08ec-63cc-4ffa-aba2-8978e411433e | ae0 | None | ae | None | [] | ['enp2s0f0', 'enp2s0f1'] | ['clusterhost0', 'mgmt0', 'oam0'] | MTU=1500,AE_MODE=802.3ad,AE_XMIT_POLICY=layer2+3 |
| 8d7fe091-4c63-4ef7-8983-cd2007470443 | enp2s0f0 | None | ethernet | None | ['enp2s0f0'] | [] | ['ae0'] | MTU=1500 |
| e6ff860f-a6ef-49df-bde7-437665625970 | clusterhost0 | platform | vlan | 101 | [] | ['ae0'] | [] | MTU=1500 |
+--------------------------------------+--------------+----------+----------+---------+--------------+--------------------------+-----------------------------------+--------------------------------------------------+

Worker nodes start correctly after reboots, but after 24 hours the mgmt IP is removed from the interface and they become unavailable. It turned out that networking.service is in a failed state:

sysadmin at worker-0:~$ systemctl status networking.service
networking.service - Raise network interfaces
     Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Tue 2023-02-28 13:26:44 UTC; 46s ago
       Docs: man:interfaces(5)
    Process: 2068 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
   Main PID: 2068 (code=exited, status=1/FAILURE)
        CPU: 561ms

Because of that, the dhclient process is not running, the 'valid_lft' timeout is never refreshed, and after lease-time expiry the IP address is removed from the interface.

Running 'ifup -a' manually gave me an error:

$ sudo /sbin/ifup -a --read-environment
No iface stanza found for master ae0
run-parts: /etc/network/if-pre-up.d/ifenslave exited with return code 1
ifup: failed to bring up enp2s0f0
No iface stanza found for master ae0
run-parts: /etc/network/if-pre-up.d/ifenslave exited with return code 1
ifup: failed to bring up enp2s0f1

Googling this error led me to a number of messages about ethernet bonding being somewhat broken in Debian Bullseye at some point:
https://blog.rtsp.us/debian-11-bullseye-bonding-problem-9d8d8866117e

Hacking /etc/network/if-pre-up.d/ifenslave as suggested in the article solved the problem:

sed -i 's/ifstate -l/ifquery -l/g' /etc/network/if-pre-up.d/ifenslave

After a reboot, the networking service is in an active state and is managing the dhclient processes:

sysadmin at worker-0:~$ systemctl status networking.service
networking.service - Raise network interfaces
     Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
     Active: active (exited) since Tue 2023-02-28 14:28:43 UTC; 52s ago
       Docs: man:interfaces(5)
    Process: 2064 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=0/SUCCESS)
   Main PID: 2064 (code=exited, status=0/SUCCESS)
      Tasks: 8 (limit: 306419)
     Memory: 10.0M
        CPU: 678ms
     CGroup: /system.slice/networking.service
             ├─2436 /sbin/dhclient -4 -v -i -pf /run/dhclient.vlan101.pid -lf /var/lib/dhcp/dhclient.vlan101.leases -I -df /var/lib/dhcp/dhclient6.vlan102.leases vlan101
             └─2569 /sbin/dhclient -4 -v -i -pf /run/dhclient.vlan201.pid -lf /var/lib/dhcp/dhclient.vlan201.leases -I -df /var/lib/dhcp/dhclient6.vlan202.leases vlan201

Could someone please validate the issue? What is the preferred way to fix this in a production environment until it can be fixed upstream?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kennelson11 at gmail.com Wed Mar 1 15:04:50 2023
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Wed, 1 Mar 2023 09:04:50 -0600
Subject: [Starlingx-discuss] Fwd: June 2023 PTG Team Signup Kickoff
In-Reply-To:
References:
Message-ID:

Hello Everyone,

As you may have seen, we are hosting an abbreviated PTG in conjunction with the Vancouver OpenInfra Summit[0]! To sign your team up, you must complete the survey[1] by April 2nd at 7:00 UTC.

We NEED accurate contact information for the moderator of your team's sessions. This is because the survey information will be used to organize the schedule signups which will be done via the PTGBot. If you are not on IRC, please get set up[2] on the OFTC network and join #openinfra-events. You are also encouraged to familiarize yourself with the PTGBot documentation[3] as well.

If you have any questions, please reach out! Information about signing up for timeslots will be sent to moderators shortly after the team signup deadline.

Registration is open[4] and prices will increase May 5th! Continue to visit openinfra.dev/ptg for updates.

-Kendall (diablo_rojo)

[0] OpenInfra Summit Site: https://openinfra.dev/summit/vancouver-2023
[1] Team Survey: https://openinfrafoundation.formstack.com/forms/june2023_ptg_survey
[2] Setup IRC: https://docs.openstack.org/contributors/common/irc.html
[3] PTGBot README: https://opendev.org/openstack/ptgbot/src/branch/master/README.rst
[4] OpenInfra Summit Registration: https://vancouver2023.openinfra.dev/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ghada.Khalil at windriver.com Thu Mar 2 02:14:21 2023
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Thu, 2 Mar 2023 02:14:21 +0000
Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - Mar 1/2023
Message-ID:

Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release Team Meeting - Mar 1 2023 stx.9.0 - Release/Feature Planning: https://docs.google.com/spreadsheets/d/1aTjYzUkExodfayt-rjTv466jE-DP8b_YjrTHhXW6G9w/edit?usp=sharing - Action: PLs to add feature proposals in time for the March PTG stx.8.0 - stx.8.0 release announced on Feb 22. Woo hoo! - stx.8.0 Story Board Cleanup - https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.8.0&project_group_id=86 - 21 stories are still open. Stories continuing in the next release need to have an stx.9.0 tag added. Others should be closed (i.e.
remaining tasks updated accordingly) - Action: Feature Primes to update/close stories - Weekly RC builds - In the process of being setup by Scott Little - Blogs - Proposed Topics - FEC Device Configurability (fec-operator Integration) for ACC100 & N3000 - Prime: Balendu (Mouli) Burla - PTP O-RAN Compliant API Notification - Prime: Ghada Khalil - SSH integration with remote Windows Active Directory - Agreed w/ Greg that we don't need a blog for this - Platform Single Core Tuning - Prime: Guilherme Batista Leite - O-RAN Spec Compliant O2 Interfaces - Prime: Litao Gao - Next steps: Ghada to reach out to the primes requesting a commitment / forecast and providing links to the blog location stx.7.0 - Blogs - PTP Enhancements - Multiple NIC Support / SyncE / Various NIC Support - Prime: Ghada Khalil - Forecast: mid-Sept >> end-Oct >> mid-Dec >> end-Jan / Author: Cole Walker - Status: Not Started. Need to re-forecast From Greg.Waines at windriver.com Thu Mar 2 18:40:55 2023 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 2 Mar 2023 18:40:55 +0000 Subject: [Starlingx-discuss] deploy starlingx simplex on Intel NUC In-Reply-To: References: Message-ID: See in-lined response below, Greg. From: voipas Sent: Tuesday, February 28, 2023 9:33 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] deploy starlingx simplex on Intel NUC CAUTION: This email comes from a non Wind River email account! Do not click links or open attachments unless you recognize the sender and know the content is safe. Hey, I would like to deploy the latest version of Starlingx on Intel NUC12WSKi5 with 32 GB RAM storage, I have Disk1: 2 TB disk and Disk2: 500 GB. According to your documentation, it is required to have a minimum two interfaces, NUC only has single. [Greg] If you are talking just about using Kubernetes on StarlingX. Then you can deploy an AIO-SX server with a SINGLE interface. See https://docs.starlingx.io/node_management/kubernetes/node_interfaces/sriov-port-sharing.html for a description of how you can share/configure a single interface as SRIOV with VFs attached to 1 or more data networks (i.e. for multus/sriov attachment to container) and the same SRIOV interface whose PF is configured with VLANs attached to platform networks (i.e. mgmt, oam, cluster-host). This is used by some of our customers. If you are talking about using OpenStack on StarlingX, the same SRIOV interface sharing can be used, with 1 or more of the VFs configured as ?data interfaces? attached to OVS ? and the same SRIOV interface whose PF is configured with VLANs attached to platform networks (i.e. mgmt, oam, cluster-host). HOWEVER ? I don?t believe this has ever been tested. That's why I have USB Ethernet. According to this article - https://github.com/marcelarosalesj/learning-starlingx/blob/master/nuc.md - I need patch starlingx kernel and create a boot disk (as per my understanding directly downloaded ISO does not have such kernel modules). [Greg] Correct So my question is : - might there be a way to deploy with a single network , and no additional ports required (to play and learn at home in simplex mode)? [Greg] See discussion above about SRIOV Port Sharing - for home lab, might it be better to install proxmox and create a VM and install Starlingx? [Greg] My recommendation would be to use Virtual Box. The MAJORITY of starlingx community members do development testing with a StarlingX Deployment in Virtual Box. 
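(As a rough, unofficial sketch of what that can look like from the command line, assuming VirtualBox is already installed and a host-only network exists; the VM name, NIC layout and resource sizes below are illustrative assumptions only, not values taken from the StarlingX guide:

VBoxManage createvm --name stx-aio-sx --ostype Debian_64 --register
VBoxManage modifyvm stx-aio-sx --memory 20480 --cpus 8 --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 nat
VBoxManage createmedium disk --filename stx-aio-sx.vdi --size 512000
VBoxManage storagectl stx-aio-sx --name SATA --add sata
VBoxManage storageattach stx-aio-sx --storagectl SATA --port 0 --device 0 --type hdd --medium stx-aio-sx.vdi
VBoxManage storageattach stx-aio-sx --storagectl SATA --port 1 --device 0 --type dvddrive --medium starlingx-intel-x86-64-cd.iso

The supported sizing, networking and boot configuration for a virtual AIO-SX deployment is described in the install guide linked below.)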
See https://docs.starlingx.io/deploy_install_guides/release/virtual/install_virtualbox.html - If no, do you have the latest developer document , how can we build a custom iso with USB Ethernet? Now it looks old and for Centos, not debian... Thanks in advance -- Best Regards, Giedrius -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Mar 2 23:54:46 2023 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 2 Mar 2023 23:54:46 +0000 Subject: [Starlingx-discuss] Minutes: Community Call (Mar 1, 2023) Message-ID: Etherpad: https://etherpad.opendev.org/p/stx-status Minutes from the community call March 1, 2023 Meeting is at 7pm PDT / 10pm EDT / +1 10am CST Standing topics - Build - Main Branch Debian Builds - Green all week - stx.6.0 Weekly Builds - Green - Build Output: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/6.0/ - stx.7.0 Weekly Builds - Green - Build Output: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/7.0/ - stx.8.0 Weekly Builds - Scott Little is still working on setting up the weekly builds - Sanity - Debian Main Branch Platform Sanity - Last sanity email sent on Feb 28: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013876.html - Status: Green for SX. Not run for DX due to lab availability. - Debian Main Branch stx-openstack Sanity - Last sanity email sent on Feb 28: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013874.html - Status: Yellow - LP: https://bugs.launchpad.net/starlingx/+bug/2007303 -- intermittent issue; seen in 2 consecutive sanities - Gerrit Reviews in Need of Attention - From previous meetings: Fixes to libvirt env: https://review.opendev.org/c/starlingx/tools/+/863735 - Review comments provided in Nov; still waiting for author (Scott Kamp) to respond/address review comments - Jan 18: Some activity in the review as of Jan 15 - Feb 1: Alternative fix proposed on Jan 18; waiting for ScottK's review - Feb 15: ScottK is going back to this - Reference Links: - Active Branch (open): https://review.opendev.org/q/projects:starlingx+is:open+branch:+master - Active Branch (merged): https://review.opendev.org/q/projects:starlingx+is:merged+branch:master Topics for this week - Attendance/participation is still low for APAC friendly meetings. - Only 5 attendees this month. - No attendees from APAC. Discuss in the next community meeting whether we need to keep this alternative timeframe - Release Status - Release Planning Meeting etherpad: https://etherpad.opendev.org/p/stx-releases - stx.8.0 - stx.8.0 release milestone declared on Feb 22. Congratulations everyone! 
- stx.9.0 - Release Tracking Spreadsheet created: https://docs.google.com/spreadsheets/d/1aTjYzUkExodfayt-rjTv466jE-DP8b_YjrTHhXW6G9w/edit?usp=sharing - New features starting to be added - Call to community members/PLs to continue adding their feautre proposal - Would like to have the majority of features proposed by the March virtual PTG (March 28-29) ARs from Previous Meetings - None Open Requests for Help - stx.8.0 Interface Bonding issue - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-March/013877.html - Action: Ghada to ask Steve Webster (networking PL) to respond - Deployment on Intel NUC w/ single interface - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013875.html - Action: Greg to respond - StarlingX Support on ARM - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013819.html - Not currently supported on StarlingX, but should be more doable with the move to Debian and the yocto kernel - ARM Support would definitely require work to support, including build system support - A few years ago MarkA and Greg did some prototype work on orange-pie - MarkA responded to the thread: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013841.html - Status: Watch List; will leave on the community list for a few more weeks - Mar 1: No further responses since Mark's response on Feb 14. - stx-openstack apply failed after enabling cpu_dedicated_setInbox - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-January/013720.html - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-January/013724.html - Seen w/ stx.7.0. Issue only happens when setting cpu dedicated CPUs on a running system. During the unlock, the openstack fails to re-apply. - LP: https://bugs.launchpad.net/starlingx/+bug/2002157 - Assigned to Thales; team has no bandwidth to look at this for the remaining of the month. Will reach out to the reporter to share this. - Status: Open / No updates From ildiko at openstack.org Fri Mar 3 15:55:35 2023 From: ildiko at openstack.org (Ildiko Vancsa) Date: Fri, 3 Mar 2023 16:55:35 +0100 Subject: [Starlingx-discuss] OpenDev information and reminder Message-ID: <18E30496-5508-4B89-A463-04879C802B58@openstack.org> Hi All, I?m reaching out to you with information and a friendly reminder about OpenDev. During the last preparation steps of the recent 8.0 release process, the docs team encountered a Zuul build issue. During solving the issue, we identified a couple of other ones: * It was not clear to the Docs team who and how to reach out to when they discovered that there was an issue * The underlying issue first appeared in the logs on February 8, but went unnoticed until February 22 To avoid the above happening again, I found it important to share some information about OpenDev: https://opendev.org OpenDev is the set of tools and infrastructure that StarlingX and other OpenInfra communities are using for code review, testing, etc. OpenDev is also a community, where people work on maintaining and evolving the infrastructure bits and pieces for the benefit of every community that are using OpenDev resources. It is also important to mention that OpenDev is separate from Zuul. It would be beneficial for the StarlingX community to collaborate closer with the OpenDev folks to help maintain the infrastructure and ensure that issues are uncovered and solved quickly. 
You can reach out and interact with the OpenDev community on IRC or their mailing list: * IRC #opendev on OFTC * Mailing list: service-discuss at lists.opendev.org - http://lists.opendev.org/cgi-bin/mailman/listinfo/service-discuss I also reported a bug and will work on the fix, to add the above information to the StarlingX Documentation https://bugs.launchpad.net/starlingx/+bug/2009185 Please let me know if you have any questions. Best Regards, Ildik? ??? Ildik? V?ncsa Director of Community Open Infrastructure Foundation From Dan.Voiculeasa at windriver.com Fri Mar 3 17:31:30 2023 From: Dan.Voiculeasa at windriver.com (Voiculeasa, Dan) Date: Fri, 3 Mar 2023 17:31:30 +0000 Subject: [Starlingx-discuss] StarlingX Apps general documentation In-Reply-To: References: Message-ID: An update on this: I've added information about how to upgrade apps during platform upgrades in: https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals#Upgrade_considerations Fixed the incorrect information in upgrades/auto_update section, the metadata here is used after, not during platform upgrades. https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals#upgrades.2Fauto_update Mentioned the existence of behavior/platform_managed_apps metadata, full details in progress. Thanks, Dan Voiculeasa ________________________________ From: Voiculeasa, Dan Sent: Tuesday, February 7, 2023 5:01 PM To: starlingx-discuss Subject: StarlingX Apps general documentation To all app developers out there, The only Developer refence so far is https://wiki.openstack.org/wiki/StarlingX/Containers/HowToAddNewFluxCDAppInSTX, which focuses on building a StarlingX App which uses FluxCD. The focus is more on build environment so the interaction with the App Framework itself was out of scope. In an attempt demystify the StarlingX App <-> App Framework interaction I started a new page: https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals . It is in no way complete work, the work is just at the beginning. The purpose would be to show examples on how to configure your app to have different behavior, explain what these behaviors are, list scenarios, general guidelines. The only scenarios explained so far are: * auto update of apps * keep/reset user overrides during updates * keep/reset disabled helm charts during updates I believe we can enable auto update functionality for the apps, the only reason I sent an email so early is out of curiosity, want the app devs to be aware of the auto-update scenario and then get feedback from app devs about their app specific requirements preventing automatic update from being enabled (if any). Also, will let you know when more info is added to the wiki. Thanks, Dan Voiculeasa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Dan.Voiculeasa at windriver.com Mon Mar 6 11:53:58 2023 From: Dan.Voiculeasa at windriver.com (Voiculeasa, Dan) Date: Mon, 6 Mar 2023 11:53:58 +0000 Subject: [Starlingx-discuss] StarlingX Apps general documentation In-Reply-To: References: Message-ID: Thanks, Dan Voiculeasa ________________________________ From: Voiculeasa, Dan Sent: Friday, March 3, 2023 7:31 PM To: starlingx-discuss Subject: Re: StarlingX Apps general documentation An update on this: I've added information about how to upgrade apps during platform upgrades in: https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals#Upgrade_considerations Fixed the incorrect information in upgrades/auto_update section, the metadata here is used after, not during platform upgrades. https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals#upgrades.2Fauto_update Mentioned the existence of behavior/platform_managed_apps metadata, full details in progress. Thanks, Dan Voiculeasa ________________________________ From: Voiculeasa, Dan Sent: Tuesday, February 7, 2023 5:01 PM To: starlingx-discuss Subject: StarlingX Apps general documentation To all app developers out there, The only Developer refence so far is https://wiki.openstack.org/wiki/StarlingX/Containers/HowToAddNewFluxCDAppInSTX, which focuses on building a StarlingX App which uses FluxCD. The focus is more on build environment so the interaction with the App Framework itself was out of scope. In an attempt demystify the StarlingX App <-> App Framework interaction I started a new page: https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals . It is in no way complete work, the work is just at the beginning. The purpose would be to show examples on how to configure your app to have different behavior, explain what these behaviors are, list scenarios, general guidelines. The only scenarios explained so far are: * auto update of apps * keep/reset user overrides during updates * keep/reset disabled helm charts during updates I believe we can enable auto update functionality for the apps, the only reason I sent an email so early is out of curiosity, want the app devs to be aware of the auto-update scenario and then get feedback from app devs about their app specific requirements preventing automatic update from being enabled (if any). Also, will let you know when more info is added to the wiki. Thanks, Dan Voiculeasa -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dan.Voiculeasa at windriver.com Mon Mar 6 12:23:45 2023 From: Dan.Voiculeasa at windriver.com (Voiculeasa, Dan) Date: Mon, 6 Mar 2023 12:23:45 +0000 Subject: [Starlingx-discuss] StarlingX Apps general documentation In-Reply-To: References: Message-ID: Another update on this: After more experiments it was discovered we have the ability to automatically down-version in some cases. The auto-update procedure is explained with more details about automatic up-version/down-version in case of patch apply/remove. 
https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals#upgrades.2Fauto_update Controlling the behavior to allow an application to be automatically uploaded or applied after first controller unlock is described here: https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals#behavior.2Fdesired_state Thanks, Dan Voiculeasa ________________________________ From: Voiculeasa, Dan Sent: Friday, March 3, 2023 7:31 PM To: starlingx-discuss Subject: Re: StarlingX Apps general documentation An update on this: I've added information about how to upgrade apps during platform upgrades in: https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals#Upgrade_considerations Fixed the incorrect information in upgrades/auto_update section, the metadata here is used after, not during platform upgrades. https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals#upgrades.2Fauto_update Mentioned the existence of behavior/platform_managed_apps metadata, full details in progress. Thanks, Dan Voiculeasa ________________________________ From: Voiculeasa, Dan Sent: Tuesday, February 7, 2023 5:01 PM To: starlingx-discuss Subject: StarlingX Apps general documentation To all app developers out there, The only Developer refence so far is https://wiki.openstack.org/wiki/StarlingX/Containers/HowToAddNewFluxCDAppInSTX, which focuses on building a StarlingX App which uses FluxCD. The focus is more on build environment so the interaction with the App Framework itself was out of scope. In an attempt demystify the StarlingX App <-> App Framework interaction I started a new page: https://wiki.openstack.org/wiki/StarlingX/Containers/StarlingXAppsInternals . It is in no way complete work, the work is just at the beginning. The purpose would be to show examples on how to configure your app to have different behavior, explain what these behaviors are, list scenarios, general guidelines. The only scenarios explained so far are: * auto update of apps * keep/reset user overrides during updates * keep/reset disabled helm charts during updates I believe we can enable auto update functionality for the apps, the only reason I sent an email so early is out of curiosity, want the app devs to be aware of the auto-update scenario and then get feedback from app devs about their app specific requirements preventing automatic update from being enabled (if any). Also, will let you know when more info is added to the wiki. Thanks, Dan Voiculeasa -------------- next part -------------- An HTML attachment was scrubbed... URL: From Linda.Wang at windriver.com Mon Mar 6 19:55:56 2023 From: Linda.Wang at windriver.com (Wang, Linda) Date: Mon, 6 Mar 2023 19:55:56 +0000 Subject: [Starlingx-discuss] Bi-Weekly StarlingX OS Distro & Multi-OS meeting: Feb 22, 2023 Message-ID: Feb 22, 2023 Attendees: Steve, Mark, Andre, Scott 1. General Topics 0.1 Split package list betweeen repositories (ISO and container images) * * AI: Devlet will need to look closely to see if there is. any further improvement needed. (to invite him next time) * * re: how to deal with common packages, should they have the same version of the packages between the repos, etc.. * 0.2 Log into the git before download the source. * * How to pass the credential through the tooling layer when log into git repo. * * workaround: currently use repo sync outside of the containers. * * AI: Scott to write up a feature request for the tool - file a request in Story Board. 
0.3 Pre-patch ISO (Luis) (Carols replacing Luis Barbosa) * AI: to invite Carlos Pocahy to this meeting to discuss Pre-patch ISO. (lwang8: Ping Carlos and his manager to make sure he attends) 2. StarlingX 8.0 0.1 Secureboot Enablement * Working on Key in a separte repo. * Need to have a different repo/storage to storage keys without changing any code. * * Scott is creating the git and manifest setup. Fix up the meta data in control file, and send that back to LiZhu. [Done] * * Key separation is done. Other work that are depended on key separation that needs code review. Will focus on that today. * * StarlingX can use 1. file type (basic soltuion) or go a bit further to provide 2. API for secureboot signing. * * For option #2 One option is to interface with the concept such as Vault. * * Sent the checksum into the Vault to get sign. and the key is always in the Vault. * * Put the public key into the public key ring. * * Level of indirection, to put the signing method(s) behind an API. 0.2 ISO size bigginer than 4G with OOT driver may need to include the kernel source/debug package that will allow OOT tree driver to be built on customer target. * This will add the ISO size back up to greater than 4G. * Extension to the ISO build standard, 9660, for longer file names, addresses the daisy chain of the ISO for larger archieve. * AI: need to investigate the "ISO daisy chain extension of another 4G, and tooling needs. Need to createa story board ticket. 0.3 Kernel Live Patching Integration with the build system * start to look into the requirements of integration of feature. 0.4 Initramfs will changed for new driver updates * per one of the patch under review: https://review.opendev.org/c/starlingx/tools/+/872867 * initramfs should *only* be updated if it needs modifications to gain access to the rootfs. This will not mean that it never gets updated, but it will be rare. * Originally the code review comment provided confusing information about initramfs was only contain the default driver. the selection of the driver verison will be selected after initramfs is used. Now in reality, initramfs need to be reviewed, the seledtion of driver version is in initramfs time. 3. Open Request for help * * Raspberry Pi support as Edge. (ARM support) * * AI: Mark to reply to the thread and investigate further. * * new features such as Swapping kernel, kernel module for in-tree modules need community member to support and maintain. Next Meeting: March 8, 2023 -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From Peng.Peng at windriver.com Tue Mar 7 17:49:16 2023
From: Peng.Peng at windriver.com (Peng, Peng)
Date: Tue, 7 Mar 2023 17:49:16 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20230303T070000Z
Message-ID:

Sanity Test from 2023 March 6 (https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230303T070000Z/outputs/iso/starlingx-intel-x86-64-cd.iso)

Status: GREEN

SX sanity
Passed: 17 (100.0%)
Failed: 0 (0.0%)
Total Executed: 17

List of Test Cases:
------------------------------------------------------
PASS test_system_health_pre_session[pods]
PASS test_system_health_pre_session[alarms]
PASS test_system_health_pre_session[system_apps]
PASS test_horizon_host_inventory_display
PASS test_lock_unlock_host
PASS test_pod_to_pod_connection
PASS test_pod_to_service_connection
PASS test_host_to_service_connection
PASS test_push_docker_image_to_local_registry_active
PASS test_upload_charts_via_helm_upload
PASS test_host_operations_with_custom_kubectl_app
PASS test_isolated_2p_2_big_pod_best_effort_HT_AIO
PASS test_sriovdp_netdev_single_pod[1-1-lock/unlock]
PASS test_sriovdp_netdev_connectivity_ipv4[1-1-calico-ipam]
PASS test_sriovdp_mixed_add_vf_interface[1]
PASS test_system_coredumps_and_crashes[core_dumps]
PASS test_system_coredumps_and_crashes[crash_reports]

DX lab is not available this week. We will try to resume DX sanity next week.

Regards,
PV team

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 114769003 at qq.com Wed Mar 8 12:08:20 2023
From: 114769003 at qq.com (LiuYongfu)
Date: Wed, 8 Mar 2023 20:08:20 +0800
Subject: [Starlingx-discuss] Re: RE: how to install my own rpm
Message-ID:

Hi, thanks very much. So I need to make a patch and install it. My first step is to build StarlingX; when I build it, I hit the problems shown below. Could you please help me? Thanks very much.

LiuYongfu
114769003 at qq.com

Original
From: "Bailey, Henry Albert (Al)" < Al.Bailey at windriver.com >;
Date: 2023/2/18 5:05
To: "LiuYongfu" < 114769003 at qq.com >; "starlingx-discuss" < starlingx-discuss at lists.starlingx.io >;
Subject: RE: [Starlingx-discuss] how to install my own rpm

With StarlingX, the way to add an rpm to the platform is through a patch. Within the StarlingX patching system it will install the rpm, and will prevent it from being removed by a reboot.

There is a video from the Open Infrastructure Foundation that shows an example of creating your own patch to allow an rpm to be installed. It uses the patch_build.sh command: https://www.youtube.com/watch?v=vwqhxpgaxXE

For example, if I wanted to add an RPM for pyflame, my patch_build.sh syntax would look something like this:

# Run this script from within $MY_WORKSPACE
PATH=$MY_REPO/stx/update/extras/scripts:$PATH
PATCH_ID=PYFLAME
DIR=std/rpmbuild/RPMS
PYFLAME=pyflame-1.6.6-4.el7.tis.1.x86_64.rpm

patch_build.sh \
--id ${PATCH_ID} \
--reboot-required=Y \
--summary "New pyflame rpm " \
--desc "Adds a new pyflame rpm " \
--controller ${DIR}/${PYFLAME} \
--controller-worker ${DIR}/${PYFLAME} \
--controller-worker-lowlatency ${DIR}/${PYFLAME}

Be aware that in this example, I am 'adding'
an rpm  rather than updating one, so I needed to include special fields (--controller ,  --controller-worker ,  --controller-worker-lowlatency ) so that the platform knows that the rpm belongs on controller and AIO hosts (and in this example it will not install it on worker or storage nodes)   The video (and my example) assume you have a development environment setup with your variables, etc..  already defined.  If that is not the case, you may need to ensure you have the starlingx/update repo checked out and make the appropriate changes to the commands.     Al   From: LiuYongfu <114769003 at qq.com> Sent: Saturday, February 11, 2023 5:52 AM To: starlingx-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: D4093834 at E3047A3C.B47A086400000000.png Type: application/octet-stream Size: 120814 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F6672A2D at 00408767.B47A086400000000.jpg Type: image/jpeg Size: 823 bytes Desc: not available URL: From Al.Bailey at windriver.com Wed Mar 8 14:18:31 2023 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Wed, 8 Mar 2023 14:18:31 +0000 Subject: [Starlingx-discuss] =?utf-8?q?Re=EF=BC=9ARE=3A__how_to_install_m?= =?utf-8?q?y_own_rpm?= In-Reply-To: References: Message-ID: I think if you have the rpm already pre-built, you can probably bypass building StarlingX and just try to run the patch-build command You just need to make sure the tool can find the rpm. Someone else on this channel might have info about the build problems and environment setup Al From: LiuYongfu <114769003 at qq.com> Sent: Wednesday, March 8, 2023 7:08 AM To: Bailey, Henry Albert (Al) ; starlingx-discuss Subject: Re?RE: [Starlingx-discuss] how to install my own rpm CAUTION: This email comes from a non Wind River email account! Do not click links or open attachments unless you recognize the sender and know the content is safe. Hi? thanks very much . so I need to make a pach and install it . my first step is to build the starlingx .when I build this. It has some problems as below: could you please help me ? thanks very much. [cid:image001.png at 01D9519E.F28B5340] ________________________________ [Image removed by sender.] LiuYongfu 114769003 at qq.com Original From:"Bailey, Henry Albert (Al)"< Al.Bailey at windriver.com >; Date:2023/2/18 5:05 To:"LiuYongfu"< 114769003 at qq.com >;"starlingx-discuss"< starlingx-discuss at lists.starlingx.io >; Subject:RE: [Starlingx-discuss] how to install my own rpm With StarlingX, the way to add an rpm to the platform is through a patch. Within the StarlingX patching system it will install the rpm, and will prevent it from being removed by a reboot. There is a video from the Open Infrastructure Foundation that shows an example of creating your own patch to allow an rpm to be installed. It uses the patch_build.sh command https://www.youtube.com/watch?v=vwqhxpgaxXE For example, if I wanted to add an RPM for pyflame my patch_build.sh syntax would look something like this: # Run this script from within $MY_WORKSPACE PATH=$MY_REPO/stx/update/extras/scripts:$PATH PATCH_ID=PYFLAME DIR=std/rpmbuild/RPMS PYFLAME=pyflame-1.6.6-4.el7.tis.1.x86_64.rpm patch_build.sh \ --id ${PATCH_ID} \ --reboot-required=Y \ --summary "New pyflame rpm " \ --desc "Adds a new pyflame rpm "\ --controller ${DIR}/${PYFLAME} \ --controller-worker ${DIR}/${PYFLAME} \ --controller-worker-lowlatency ${DIR}/${PYFLAME} Be aware that in this example, I am ?adding? 
an rpm rather than updating one, so I needed to include special fields (--controller , --controller-worker , --controller-worker-lowlatency ) so that the platform knows that the rpm belongs on controller and AIO hosts (and in this example it will not install it on worker or storage nodes) The video (and my example) assume you have a development environment setup with your variables, etc.. already defined. If that is not the case, you may need to ensure you have the starlingx/update repo checked out and make the appropriate changes to the commands. Al From: LiuYongfu <114769003 at qq.com> Sent: Saturday, February 11, 2023 5:52 AM To: starlingx-discuss Subject: [Starlingx-discuss] how to install my own rpm CAUTION: This email comes from a non Wind River email account! Do not click links or open attachments unless you recognize the sender and know the content is safe. Brief Description ----------------- when I install my private(own) rpm to the compute node. when the node is restarted .my private rpm is deleted. so I want to know : are there some ways to install my own rpm on compute nodes or is it just a bug? Steps to Reproduce ------------------ I installed my GPU rpm packages on the compute node,and it worked right. Expected Behavior ------------------ the rpm packages works correctly all the time. Actual Behavior when reboot the compute nodes .my gpu rpm packages are removed by the system. thanks very much. [cid:~WRD0002.jpg] LiuYongfu 114769003 at qq.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ~WRD0002.jpg Type: image/jpeg Size: 823 bytes Desc: ~WRD0002.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 120814 bytes Desc: image001.png URL: From ildiko at openstack.org Wed Mar 8 15:56:16 2023 From: ildiko at openstack.org (Ildiko Vancsa) Date: Wed, 8 Mar 2023 16:56:16 +0100 Subject: [Starlingx-discuss] StarlingX 8.0 media coverage Message-ID: <171A8E9B-88A1-4DA7-8BFC-E6D952C38642@openstack.org> Hi StarlingX Community, Following up on a conversation on the Community Call earlier today about the media coverage of the StarlingX 8.0 release. 
Here is the list of articles that cover the new release:

* Heise - Edge Computing and Industrial IoT: Cloud stack StarlingX 8 switches to Debian (https://www.heise.de/news/Edge-Computing-und-IIoT-Cloud-Stack-StarlingX-8-wechselt-auf-Debian-OS-7526774.html)
* Linux Magazin - StarlingX 8.0 completes move to Debian (https://www.linux-magazin.de/news/starlingx-8-0-schliesst-umzug-zu-debian-ab/)
* SDxCentral - StarlingX 8.0 Targets Telcos, O-RAN With Open Source Cloud and Edge Platform (https://www.sdxcentral.com/articles/news/starlingx-8-0-targets-telcos-o-ran-with-open-source-cloud-and-edge-platform/2023/02/)
* Open Source Watch - StarlingX 8: The cloud for edge computing gets a major upgrade (https://opensourcewatch.beehiiv.com/p/starlingx-8-cloud-edge-computing-gets-major-upgrade)
* Database Trends and Applications - StarlingX 8.0 Delivers Support for Distributed Cloud Architecture and Edge Computing (https://www.dbta.com/Editorial/News-Flashes/StarlingX-80-Delivers-Support-for-Distributed-Cloud-Architecture-and-Edge-Computing-157308.aspx)
* AiThority.com - StarlingX 8.0 Delivers Enhanced Stability, Scalability, Support for RAN, Distributed Cloud Architecture, Edge Computing Use Cases (https://aithority.com/technology/starlingx-8-0-delivers-enhanced-stability-scalability-support-for-ran-distributed-cloud-architecture-edge-computing-use-cases/)
* LXer - StarlingX 8.0 Targets Telcos, O-RAN With Open Source Cloud and Edge Platform (http://lxer.com/module/newswire/view/326801/index.html)
* Techzine - https://www.techzine.nl/nieuws/devops/518905/starlingx-8-0-met-aandacht-voor-schaalbarheid-en-ran-beschikbaar/
* DataCenter-Insider - StarlingX 8.0: Open-Source-Plattform für die Edge (https://www.datacenter-insider.de/starlingx-80-open-source-plattform-fuer-die-edge-a-0b71a3a5a62ada77e5a609941a082df4/?cmp=beleg-mail)

Thanks and Best Regards,
Ildikó

---
Ildikó Váncsa
Director of Community
Open Infrastructure Foundation

---
Ildikó Váncsa
Director of Community
Open Infrastructure Foundation

From scott.little at eng.windriver.com Wed Mar 8 15:56:50 2023
From: scott.little at eng.windriver.com (Scott.Little)
Date: Wed, 8 Mar 2023 10:56:50 -0500
Subject: [Starlingx-discuss] [Build] CENGN unable to build debian master since March 3
Message-ID: <7e358ebe-7f30-e218-1d3e-8f9e03316b09@eng.windriver.com>

Hi all

CENGN has been unable to complete a build since March 3.

Since then, four builds were attempted, and all builds hung within the post-build unit tests of python3.9_3.9.2-1.stx.1.

One build was hung for nearly 48 hours.

The logs are not specific on which test is hanging.

Killing the hung unit test results in the overall build of the python3.9 package failing, but a retry loop attempts to rebuild, and it again hangs in the unit tests.

There is nothing in the change logs that directly affects this package. The only build system changes relate to secure boot signing, and as the python package is not one that requires signing, I'm currently discounting those as a cause.

An equivalent build using an internal WindRiver build machine has so far not hit this issue. The main difference is that CENGN uses minikube, and the internal server is using kubernetes directly.

One theory was that something changed upstream that affects the content within the build containers. However, both CENGN and the internal build server rebuild the build containers each time. If there was a change upstream, I would expect both builds to see it.
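(One rough way to compare the two environments, assuming a docker-backed minikube build setup and the default StarlingX builder image names, neither of which is confirmed here, is to look at when the local build containers were created and what they digest to:

# point the docker CLI at minikube's docker daemon, if that is where the builder images live
eval $(minikube -p minikube docker-env)
# list builder-related images with creation time and digest
docker images --digests --format 'table {{.Repository}}\t{{.Tag}}\t{{.CreatedAt}}\t{{.Digest}}' | grep -i -e stx -e builder

If the two environments show different digests for the same builder image, an upstream content change becomes easier to confirm or rule out.)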
Most designers are likely using minikube, and so far I've seen no complaints from designers on this topic.? Perhaps designers are using a build environment created on or before March 3, and haven't seen it yet.? Have you encountered hung builds in the last week??? Please report the issue, and we would like to know when the build containers were last rebuilt.?? We'll be trying to setup a fresh minikube build later today. At some point we may be forced to disable the python unit tests as a work around, until a better solution presents itself. Scott From scott.little at windriver.com Wed Mar 8 16:05:17 2023 From: scott.little at windriver.com (Scott Little) Date: Wed, 8 Mar 2023 11:05:17 -0500 Subject: [Starlingx-discuss] [Build] CENGN unable to build debian master since March 3 In-Reply-To: <7e358ebe-7f30-e218-1d3e-8f9e03316b09@eng.windriver.com> References: <7e358ebe-7f30-e218-1d3e-8f9e03316b09@eng.windriver.com> Message-ID: <87a57fc7-b937-2f75-86c1-d47ba1dfd005@windriver.com> Created a LaunchPad to track the issue: https://bugs.launchpad.net/starlingx/+bug/2009722 Scott On 2023-03-08 10:56, Scott.Little wrote: > Hi all > > > CENGN has been unable to complete a build since March 3. > > Since then, four builds were attempted, and all builds hung within the > post-build unit tests of python3.9_3.9.2-1.stx.1. > > One build was hung for nearly 48 hours. > > The logs are not specific on which test is hanging. > > Killing the hung unit test results in the overall build of the > python3.9 package build failing, but a retry loop attempts to rebuild, > and it again hangs in the unit tests. > > There is nothing in the change logs that directly affect this > package.? The only build system changes relate to secure boot. signing > and as the python package is not one that requires signing, I'm > currently discounting those as a cause. > > An equivalent build using an internal WindRiver build machine has so > far not hit this issue.? The main difference being that CENGN uses > minikube, and the internal server is using kubernetes directly. > > One theory was that something change upstream that affects the content > within the build containers.? However, both CENGN and the internal > build server rebuild the build containers each time.? If there was a > change upstream, I would expect both builds to see it. > > Most designers are likely using minikube, and so far I've seen no > complaints from designers on this topic.? Perhaps designers are using > a build environment created on or before March 3, and haven't seen it > yet.? Have you encountered hung builds in the last week??? Please > report the issue, and we would like to know when the build containers > were last rebuilt.?? We'll be trying to setup a fresh minikube build > later today. > > At some point we may be forced to disable the python unit tests as a > work around, until a better solution presents itself. 
> > Scott > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io From Lucas.DeAtaidesBarreto at windriver.com Wed Mar 8 19:49:40 2023 From: Lucas.DeAtaidesBarreto at windriver.com (De Ataides Barreto, Lucas) Date: Wed, 8 Mar 2023 19:49:40 +0000 Subject: [Starlingx-discuss] Sanity - StarlingX + STX-Openstack MASTER Build ISO [20230301T070000Z] results - Mar-08 Message-ID: Hi all, Here are the results for StarlingX + STX-Openstack sanity using the Mar-02 build: https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230301T070000Z/outputs/iso/starlingx-intel-x86-64-cd.iso and Helm-Charts: https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230301T070000Z/outputs/helm-charts/stx-openstack-1.0-1.stx.43-debian-stable-versioned.tgz Sorry for the delayed results, there's an issue with the build that started on Mar-03: https://bugs.launchpad.net/starlingx/+bug/2009722 - CENGN has been unable to complete a build since March 3. Overall Sanity Results: YELLOW AIO-DX Baremetal with VSWITCH_TYPE=ovs Sanity Status: YELLOW Automated Test Results Summary: ------------------------------------------------------ Passed: 14 (93.33%) Failed: 1 (6.67%) Total Executed: 15 List of Test Cases: ------------------------------------------------------ PASS 20230308 18:19:51 test_ssh_to_hosts PASS 20230308 18:21:33 test_lock_unlock_host PASS 20230308 18:40:28 test_openstack_services_healthy PASS 20230308 18:41:19 test_reapply_stx_openstack_no_change[controller-0] PASS 20230308 18:47:44 test_reapply_stx_openstack_no_change[controller-1] PASS 20230308 18:52:37 test_horizon_create_delete_instance PASS 20230308 19:01:24 test_swact_controllers PASS 20230308 19:10:32 test_ping_between_two_vms[tis-centos-guest-virtio-virtio] PASS 20230308 19:17:05 test_migrate_vm[tis-centos-guest-live-None] PASS 20230308 19:22:06 test_nova_actions[tis-centos-guest-dedicated-pause-unpause] PASS 20230308 19:26:39 test_nova_actions[tis-centos-guest-dedicated-suspend-resume] FAIL 20230308 19:31:12 TestTisGuest::test_evacuate_vms PASS 20230308 19:35:07 test_system_coredumps_and_crashes[core_dumps] PASS 20230308 19:35:31 test_system_coredumps_and_crashes[crash_reports] PASS 20230308 19:35:44 test_system_alarms ----------------------------------------------------------------------- The failed test was due to the following issue: * https://bugs.launchpad.net/starlingx/+bug/2007303 - STX-Openstack: "nova live-migration" fails to live migrate after host is forcefully turned off/on Thanks, STX-Openstack Distro Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Mar 9 00:08:15 2023 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 9 Mar 2023 00:08:15 +0000 Subject: [Starlingx-discuss] Minutes: Community Call (Mar 8, 2023) Message-ID: Etherpad: https://etherpad.opendev.org/p/stx-status Minutes from the community call March 8, 2023 Standing topics - Build - Main Branch Debian Builds - No successful builds this week. Last successful build: March 3 - Builds Hung - CENGN builds have been consistently getting hung on running py39 unit tests. Doesn't seem to be related to any recent code changes. - Maybe related to the downloaded content OR the minikube setup. - Options: (1) continue to investigate. 
(2) Workaround: disable py39 unit tests in the short term - LP coming - Docker Image Builds expected to fail - Issue is related to Debian released a new version of git package which is used in the build and appears to be broken - LP coming - Impact on developer builds is unknown - Action: ScottL to send a specific email to the mailing list about this topic - Greg (TSC) recommendation is to implement a workaround while the build team continues the investigation - stx.6.0 Weekly Builds - Green - Build Output: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/6.0/ - stx.7.0 Weekly Builds - Green - Build Output: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/7.0/ - stx.8.0 Weekly Builds - Green - Build Output: https://mirror.starlingx.cengn.ca/mirror/starlingx/rc/8.0/ - Sanity - Debian Main Branch Platform Sanity - Last sanity email sent on Mar 7: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-March/013887.html - This is using the last successful CENGN build - Status: Green for SX. Not run for DX due to lab issues. - Debian Main Branch stx-openstack Sanity - Did not execute this week due to a lab installation issue; investigation in progress. LP will be open if the issue is deemed an stx software issue. - Last sanity email sent on Feb 28: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013874.html - Status: Yellow - LP: https://bugs.launchpad.net/starlingx/+bug/2007303 -- intermittent issue; seen in 2 consecutive sanities - Gerrit Reviews in Need of Attention - Nothing raised in the meeting - From previous meetings: Fixes to libvirt env: https://review.opendev.org/c/starlingx/tools/+/863735 - Review comments provided in Nov; still waiting for author (Scott Kamp) to respond/address review comments - Jan 18: Some activity in the review as of Jan 15 - Feb 1: Alternative fix proposed on Jan 18; waiting for ScottK's review - Feb 15: ScottK is going back to this - Reference Links: - Active Branch (open): https://review.opendev.org/q/projects:starlingx+is:open+branch:+master - Active Branch (merged): https://review.opendev.org/q/projects:starlingx+is:merged+branch:master Topics for this week - Ghada on vacation next week. Greg will chair the meeting. - Attendance/participation is still low for APAC friendly meetings. - Only 5 attendees this month. - No attendees from APAC. - Discuss in the next community meeting whether we need to keep this alternative timeframe - No decision made on whether to continue with these meetings - Release Status - Release Planning Meeting etherpad: https://etherpad.opendev.org/p/stx-releases - stx.8.0 - stx.8.0 release milestone declared on Feb 22. Congratulations everyone! 
- Press release published as well as some media coverage - stx.9.0 - Release Tracking Spreadsheet created: https://docs.google.com/spreadsheets/d/1aTjYzUkExodfayt-rjTv466jE-DP8b_YjrTHhXW6G9w/edit?usp=sharing - New features starting to be added - Call to community members/PLs to continue adding their feautre proposal - Would like to have the majority of features proposed by the March virtual PTG (March 28-29) ARs from Previous Meetings - None Open Requests for Help - stx.8.0 Interface Bonding issue - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-March/013877.html - Action: Ghada to ask Steve Webster (networking PL) to respond - Deployment on Intel NUC w/ single interface - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013875.html - Greg responded: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-March/013880.html - Status: Closed / Greg will monitor if there are any follow-up questions - StarlingX Support on ARM - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013819.html - Not currently supported on StarlingX, but should be more doable with the move to Debian and the yocto kernel - ARM Support would definitely require work to support, including build system support - A few years ago MarkA and Greg did some prototype work on orange-pie - MarkA responded to the thread: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-February/013841.html - Status: Watch List; will leave on the community list for a few more weeks - Mar 1: No further responses since Mark's response on Feb 14. - Greg added this as a topic for the PTG - stx-openstack apply failed after enabling cpu_dedicated_setInbox - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-January/013720.html - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-January/013724.html - Seen w/ stx.7.0. Issue only happens when setting cpu dedicated CPUs on a running system. During the unlock, the openstack fails to re-apply. - LP: https://bugs.launchpad.net/starlingx/+bug/2002157 - Assigned to Thales; team has no bandwidth to look at this for the remaining of the month. Will reach out to the reporter to share this. - Status: Open / No updates From Ghada.Khalil at windriver.com Thu Mar 9 00:16:47 2023 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 9 Mar 2023 00:16:47 +0000 Subject: [Starlingx-discuss] No stx release meeting on March 15 Message-ID: Hello all, The stx release meeting on March 15 is cancelled as I'm out of office. We're discuss planning topics as part of the virtual PTG on March 28-29. Regards, Ghada -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Mar 9 00:17:03 2023 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 9 Mar 2023 00:17:03 +0000 Subject: [Starlingx-discuss] Canceled: Bi-weekly StarlingX Release Meeting Message-ID: Updated meeting series for the StarlingX Release Meeting Starting from Nov 9, 2022 and meeting every other week. Bi-weekly meeting on Wednesday 06:30AM PT / 09:30AM ET / 02:30PM UTC Zoom Link: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases Regards, Ghada -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: text/calendar Size: 1942 bytes Desc: not available URL: From scott.little at windriver.com Fri Mar 10 14:57:08 2023 From: scott.little at windriver.com (Scott Little) Date: Fri, 10 Mar 2023 09:57:08 -0500 Subject: [Starlingx-discuss] [Build] CENGN unable to build debian master since March 3 In-Reply-To: <87a57fc7-b937-2f75-86c1-d47ba1dfd005@windriver.com> References: <7e358ebe-7f30-e218-1d3e-8f9e03316b09@eng.windriver.com> <87a57fc7-b937-2f75-86c1-d47ba1dfd005@windriver.com> Message-ID: <34621a13-e577-550c-79f9-34933b9cf546@windriver.com> I could only ever reproduce this failure on CENGN. I believe the problem was a corruption of the cached/buffered copy of a file in RAM. After a reboot, *we built successfully.* /An alternative to a reboot might have been .... / /sync; echo 1 > /proc/sys/vm/drop_caches/ Scott On 2023-03-08 11:05, Scott Little wrote: > Created a LaunchPad to track the issue: > https://bugs.launchpad.net/starlingx/+bug/2009722 > > Scott > > On 2023-03-08 10:56, Scott.Little wrote: >> Hi all >> >> >> CENGN has been unable to complete a build since March 3. >> >> Since then, four builds were attempted, and all builds hung within >> the post-build unit tests of python3.9_3.9.2-1.stx.1. >> >> One build was hung for nearly 48 hours. >> >> The logs are not specific on which test is hanging. >> >> Killing the hung unit test results in the overall build of the >> python3.9 package build failing, but a retry loop attempts to >> rebuild, and it again hangs in the unit tests. >> >> There is nothing in the change logs that directly affect this >> package.? The only build system changes relate to secure boot. >> signing and as the python package is not one that requires signing, >> I'm currently discounting those as a cause. >> >> An equivalent build using an internal WindRiver build machine has so >> far not hit this issue.? The main difference being that CENGN uses >> minikube, and the internal server is using kubernetes directly. >> >> One theory was that something change upstream that affects the >> content within the build containers.? However, both CENGN and the >> internal build server rebuild the build containers each time.? If >> there was a change upstream, I would expect both builds to see it. >> >> Most designers are likely using minikube, and so far I've seen no >> complaints from designers on this topic.? Perhaps designers are using >> a build environment created on or before March 3, and haven't seen it >> yet.? Have you encountered hung builds in the last week??? Please >> report the issue, and we would like to know when the build containers >> were last rebuilt.?? We'll be trying to setup a fresh minikube build >> later today. >> >> At some point we may be forced to disable the python unit tests as a >> work around, until a better solution presents itself. >> >> Scott >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amy at demarco.com Mon Mar 13 20:57:30 2023 From: amy at demarco.com (Amy Marrich) Date: Mon, 13 Mar 2023 15:57:30 -0500 Subject: [Starlingx-discuss] [Diversity] Diversity and Inclusion WG Meeting reminder Message-ID: This is a reminder that the Diversity and Inclusion WG will be meeting tomorrow at 14:00 UTC in the #openinfra-diversity channel on OFTC. We hope members of all OpenInfra projects join us as we look at the Code of Conduct, and continue working on planning for the OpenInfra Summit as well as Foundation-wide diversity surveys. Thanks, Amy (spotz) 0 - https://etherpad.opendev.org/p/diversity-wg-agenda From Lucas.DeAtaidesBarreto at windriver.com Tue Mar 14 12:28:43 2023 From: Lucas.DeAtaidesBarreto at windriver.com (De Ataides Barreto, Lucas) Date: Tue, 14 Mar 2023 12:28:43 +0000 Subject: [Starlingx-discuss] Sanity - StarlingX + STX-Openstack MASTER build ISO [20230313T060000Z] results - Mar-14 Message-ID: Hi all, Here are the results for StarlingX + STX-Openstack sanity using the Mar-13 build: https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230313T060000Z/outputs/iso/starlingx-intel-x86-64-cd.iso and Helm charts: https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230313T060000Z/outputs/helm-charts/stx-openstack-1.0-1.stx.44-debian-stable-versioned.tgz Overall Sanity Results: GREEN AIO-DX Baremetal with VSWITCH_TYPE=ovs Sanity Status: GREEN Automated Test Results Summary: ------------------------------------------------------ Passed: 15 (100.00%) Failed: 0 (0.0%) Total Executed: 15 List of test cases: ------------------------------------------------------ PASS????20230314 04:48:07???????test_ssh_to_hosts PASS????20230314 04:49:46???????test_lock_unlock_host PASS????20230314 05:07:53???????test_openstack_services_healthy PASS????20230314 05:08:58???????test_reapply_stx_openstack_no_change[controller-0] PASS????20230314 05:10:53???????test_reapply_stx_openstack_no_change[controller-1] PASS????20230314 05:20:05???????test_horizon_create_delete_instance PASS????20230314 05:31:04???????test_swact_controllers PASS????20230314 05:39:53???????test_ping_between_two_vms[tis-centos-guest-virtio-virtio] PASS????20230314 05:46:48???????test_migrate_vm[tis-centos-guest-live-None] PASS????20230314 05:53:32???????test_nova_actions[tis-centos-guest-dedicated-pause-unpause] PASS????20230314 05:58:28???????test_nova_actions[tis-centos-guest-dedicated-suspend-resume] PASS????20230314 06:03:08???????test_evacuate_vms PASS????20230314 06:39:38???????test_system_coredumps_and_crashes[core_dumps] PASS????20230314 06:40:00???????test_system_coredumps_and_crashes[crash_reports] PASS????20230314 06:40:12???????test_system_alarms ---------------------------------------------------------------------------------------- Thanks, STX-Openstack Distro Team -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Peng.Peng at windriver.com Tue Mar 14 17:34:56 2023 From: Peng.Peng at windriver.com (Peng, Peng) Date: Tue, 14 Mar 2023 17:34:56 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20230310T084057Z Message-ID: Sanity Test from 2023 March 13 (https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230310T084057Z/outputs/iso/starlingx-intel-x86-64-cd.iso) Status: GREEN SX sanity Passed: 17 (100.0%) Failed: 0 (0.0%) Total Executed: 17 List of Test Cases: ------------------------------------------------------ PASS test_system_health_pre_session[pods] PASS test_system_health_pre_session[alarms] PASS test_system_health_pre_session[system_apps] PASS test_horizon_host_inventory_display PASS test_lock_unlock_host PASS test_pod_to_pod_connection PASS test_pod_to_service_connection PASS test_host_to_service_connection PASS test_push_docker_image_to_local_registry_active PASS test_upload_charts_via_helm_upload PASS test_host_operations_with_custom_kubectl_app PASS test_isolated_2p_2_big_pod_best_effort_HT_AIO PASS test_sriovdp_netdev_single_pod[1-1-lock/unlock] PASS test_sriovdp_netdev_connectivity_ipv4[1-1-calico-ipam] PASS test_sriovdp_mixed_add_vf_interface[1] PASS test_system_coredumps_and_crashes[core_dumps] PASS test_system_coredumps_and_crashes[crash_reports] Regards, PV team -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Mar 15 13:55:15 2023 From: scott.little at windriver.com (Scott Little) Date: Wed, 15 Mar 2023 09:55:15 -0400 Subject: [Starlingx-discuss] [Build] The master branch build timestamped 20230314T060000Z failed Message-ID: The 20230314T060000Z failed for the container image 'stx-fm-subagent'.?? It appears to have been a brief networking glitch within CENGN. [2023-03-14T16:57:41.424Z]Err:1 http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230314T060000Z/inputs/packages ./ InRelease [2023-03-14T16:57:41.424Z] Could not connect tomirror.starlingx.cengn.ca:80 (135.84.106.45). - connect (111: Connection refused) [2023-03-14T16:57:41.424Z]Err:2 http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230314T060000Z/outputs/std/packages ./ InRelease [2023-03-14T16:57:41.424Z] Unable to connect tomirror.starlingx.cengn.ca:http: ... [2023-03-14T16:57:48.422Z] E: Unable to locate package fm-common [2023-03-14T16:57:48.422Z] E: Unable to locate package fm-common-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From Juanita.Balaraj at windriver.com Thu Mar 16 03:52:37 2023 From: Juanita.Balaraj at windriver.com (Balaraj, Juanita) Date: Thu, 16 Mar 2023 03:52:37 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 08-03-23 In-Reply-To: References: Message-ID: Hello All, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation Thanks, Juanita Balaraj ============ 08-Mar-23 Stx 9.0 Release Tracking Spreadsheet created: https://docs.google.com/spreadsheets/d/1aTjYzUkExodfayt-rjTv466jE-DP8b_YjrTHhXW6G9w/edit?usp=sharing StarlingX 8.0 Release: - February 22nd - Delivered on time. Updated the Version menus on all branches and created a review documenting how this works. 
- AR Ron Added documentation about how to contact OpenDev community for infrastructure issues Gerrit Reviews: - Open Reviews pending; https://review.opendev.org/q/starlingx/docs+status:open - Merged 1 Review https://review.opendev.org/c/starlingx/docs/+/872912 - Review merged but Zuul failed...content is not appearing in output file; Ron AR. Fixed - target disk was full. - Backslashes to be removed from rst files wherever applicable especially in code blocks, for example, \(code\_remove\) - To be cleaned up in rst files as the files are being updated for any doc Gerrit reviews. Ron did a global pass to clean these up. - Numbering in the Distributed Cloud Guide is causing issues - AR Ron to check - WIP. This is fixed as of 12/22 General Updates: - Sphinx Tools - Need to have further discussions about the version used upstream vs. downstream - AR Ron -To determine version. - Reviews for Install directory structure for Stx 6.0 / Stx 7.0 - AR Ron - Operations Guide Archive - On Hold until further clarifications are discussed with Greg (https://review.opendev.org/c/starlingx/docs/+/822030) - OpenInfra Summit & PTG: June 13-15, Vancouver, Canada - https://openinfra.dev/summit/vancouver-2023/ - Workshop Details are tracked in https://etherpad.opendev.org/p/stx_hands_on_workshop_2023 - Virtual PTG ... March 27-31, 2023 Etherpad for Virtual PTG Planning: https://etherpad.opendev.org/p/stx-ptg-planning-march-2023 ____________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Juanita.Balaraj at windriver.com Thu Mar 16 05:20:36 2023 From: Juanita.Balaraj at windriver.com (Balaraj, Juanita) Date: Thu, 16 Mar 2023 05:20:36 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 15-03-23 In-Reply-To: References: Message-ID: Hello All, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation Thanks, Juanita Balaraj ============ 15-Mar-23 Stx 9.0 Release Tracking Spreadsheet created: https://docs.google.com/spreadsheets/d/1aTjYzUkExodfayt-rjTv466jE-DP8b_YjrTHhXW6G9w/edit?usp=sharing StarlingX 8.0 Release: - Version menus on all branches and created a review documenting how this works. - AR Ron - DONE - Added documentation about how to contact opendev community for infrastructure issues - AR Ron - Open Reviews pending; https://review.opendev.org/q/starlingx/docs+status:open - Merged 1 Review https://review.opendev.org/c/starlingx/docs/+/872912 - Review merged but Zuul failed...content is not appearing in output file; Ron AR. Fixed - target disk was full. Miscellaneous Updates: - Backslashes to be removed from rst files wherever applicable especially in code blocks, for example, \(code\_remove\) - To be cleaned up in rst files as the files are being updated for any doc Gerrit reviews. Ron did a global pass to clean these up. DONE - Numbering in the Distributed Cloud Guide is causing issues - AR Ron to check - WIP. General Updates: - Sphinx Tools - Need to have further discussions about the version used upstream vs. downstream - AR Ron -To determine version. 
- Operations Guide Archive - On Hold until further clarifications are discussed with Greg (https://review.opendev.org/c/starlingx/docs/+/822030) - OpenInfra Summit & PTG: June 13-15, Vancouver, Canada - https://openinfra.dev/summit/vancouver-2023/ - Workshop Details are tracked in https://etherpad.opendev.org/p/stx_hands_on_workshop_2023 - Virtual PTG ... March 27-31, 2023 Etherpad for Virtual PTG Planning: https://etherpad.opendev.org/p/stx-ptg-planning-march-2023 ____________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Fri Mar 17 12:37:35 2023 From: Greg.Waines at windriver.com (Waines, Greg) Date: Fri, 17 Mar 2023 12:37:35 +0000 Subject: [Starlingx-discuss] Virtual STARLINGX PTG Planning - March 28-29, 2023 Message-ID: A reminder ... the Virtual STARLINGX PTG is in less than 2 weeks. I have put a proposed agenda in https://etherpad.opendev.org/p/stx-ptg-planning-march-2023 , based on the topics that have been proposed in recent tsc and community meetings, and proposed topics in this etherpad. All PLs and TLs should plan on attending. Greg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Fri Mar 17 14:06:34 2023 From: scott.little at windriver.com (Scott Little) Date: Fri, 17 Mar 2023 10:06:34 -0400 Subject: [Starlingx-discuss] [Build] CENGN unable to build debian master since March 3 In-Reply-To: <34621a13-e577-550c-79f9-34933b9cf546@windriver.com> References: <7e358ebe-7f30-e218-1d3e-8f9e03316b09@eng.windriver.com> <87a57fc7-b937-2f75-86c1-d47ba1dfd005@windriver.com> <34621a13-e577-550c-79f9-34933b9cf546@windriver.com> Message-ID: <0c7a407c-4d87-f59e-a362-6d72275f2c23@windriver.com> CENGN is again experiencing hangs in the Python3.9 unit tests.? We are also now reproducing the hang on other build machines. LaunchPad: https://bugs.launchpad.net/starlingx/+bug/2009722 I recommend we disable unit tests in the python3.9 package, until we better understand the issue. Scott On 2023-03-10 09:57, Scott Little wrote: > > I could only ever reproduce this failure on CENGN. > > I believe the problem was a corruption of the cached/buffered copy of > a file in RAM. > > After a reboot, *we built successfully.* > > /An alternative to a reboot might have been .... > / > > /sync; echo 1 > /proc/sys/vm/drop_caches/ > > Scott > > > On 2023-03-08 11:05, Scott Little wrote: >> Created a LaunchPad to track the issue: >> https://bugs.launchpad.net/starlingx/+bug/2009722 >> >> Scott >> >> On 2023-03-08 10:56, Scott.Little wrote: >>> Hi all >>> >>> >>> CENGN has been unable to complete a build since March 3. >>> >>> Since then, four builds were attempted, and all builds hung within >>> the post-build unit tests of python3.9_3.9.2-1.stx.1. >>> >>> One build was hung for nearly 48 hours. >>> >>> The logs are not specific on which test is hanging. >>> >>> Killing the hung unit test results in the overall build of the >>> python3.9 package build failing, but a retry loop attempts to >>> rebuild, and it again hangs in the unit tests. >>> >>> There is nothing in the change logs that directly affect this >>> package.? The only build system changes relate to secure boot. >>> signing and as the python package is not one that requires signing, >>> I'm currently discounting those as a cause. 
>>> >>> An equivalent build using an internal WindRiver build machine has so >>> far not hit this issue.? The main difference being that CENGN uses >>> minikube, and the internal server is using kubernetes directly. >>> >>> One theory was that something change upstream that affects the >>> content within the build containers.? However, both CENGN and the >>> internal build server rebuild the build containers each time.? If >>> there was a change upstream, I would expect both builds to see it. >>> >>> Most designers are likely using minikube, and so far I've seen no >>> complaints from designers on this topic.? Perhaps designers are >>> using a build environment created on or before March 3, and haven't >>> seen it yet.? Have you encountered hung builds in the last week??? >>> Please report the issue, and we would like to know when the build >>> containers were last rebuilt.?? We'll be trying to setup a fresh >>> minikube build later today. >>> >>> At some point we may be forced to disable the python unit tests as a >>> work around, until a better solution presents itself. >>> >>> Scott >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io -------------- next part -------------- An HTML attachment was scrubbed... URL: From starlingx.build at gmail.com Fri Mar 17 14:22:26 2023 From: starlingx.build at gmail.com (starlingx.build at gmail.com) Date: Fri, 17 Mar 2023 10:22:26 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] mail_test - Build # 8 - Still Failing! 
In-Reply-To: <65404073.19.1679062420448.JavaMail.javamailuser@localhost> References: <65404073.19.1679062420448.JavaMail.javamailuser@localhost> Message-ID: <407515449.24.1679062947797.JavaMail.javamailuser@localhost> Project: mail_test Build #: 8 Status: Still Failing Timestamp: 20230317T142225Z Branch: Check logs at: $PUBLISH_LOGS_URL -------------------------------------------------------------------------------- Parameters From Lucas.DeAtaidesBarreto at windriver.com Tue Mar 21 14:37:23 2023 From: Lucas.DeAtaidesBarreto at windriver.com (De Ataides Barreto, Lucas) Date: Tue, 21 Mar 2023 14:37:23 +0000 Subject: [Starlingx-discuss] Sanity and Regression - StarlingX + STX-Openstack MASTER build [20230319T060000Z] results - Mar-21 Message-ID: Hi all, Starling-X + STX-Openstack Sanity Results: Overall Sanity Status: RED Build Details: Build Date: Mar-19 ISO: https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230319T060000Z/outputs/iso/starlingx-intel-x86-64-cd.iso Helm charts: https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230319T060000Z/outputs/helm-charts/stx-openstack-1.0-1.stx.44-debian-stable-versioned.tgz AIO-DX Baremetal with VSWITCH_TYPE=OVS Sanity Results: Status: YELLOW Automated Test Results Summary: ------------------------------------------------------ Passed: 14 (93.33%) Failed: 1 (6.67%) Total Executed: 15 List of Test Cases: ------------------------------------------------------ PASS 20230321 04:50:51 test_ssh_to_hosts PASS 20230321 04:52:31 test_lock_unlock_host PASS 20230321 05:11:31 test_openstack_services_healthy PASS 20230321 05:12:34 test_reapply_stx_openstack_no_change[controller-0] PASS 20230321 05:14:37 test_reapply_stx_openstack_no_change[controller-1] PASS 20230321 05:24:31 test_horizon_create_delete_instance PASS 20230321 05:37:43 test_swact_controllers PASS 20230321 05:41:47 test_ping_between_two_vms[tis-centos-guest-virtio-virtio] PASS 20230321 05:47:21 test_migrate_vm[tis-centos-guest-live-None] PASS 20230321 05:51:37 test_nova_actions[tis-centos-guest-dedicated-pause-unpause] PASS 20230321 05:55:42 test_nova_actions[tis-centos-guest-dedicated-suspend-resume] FAIL 20230321 05:59:37 test_evacuate_vms PASS 20230321 06:31:16 test_system_coredumps_and_crashes[core_dumps] PASS 20230321 06:31:44 test_system_coredumps_and_crashes[crash_reports] PASS 20230321 06:32:03 test_system_alarms ----------------------------------------------------------------------- The failed test case is due to the issue described in this launchpad: https://bugs.launchpad.net/starlingx/+bug/2007303 - STX-Openstack: "nova live-migration" fails to live migrate after host is forcefully turned off/on Regression Results: Status: RED Automated Test Results Summary: ------------------------------------------------------ Passed: 9 (75.0%) Failed: 3 (25.0%) Total Executed: 12 List of Test Cases: ------------------------------------------------------ PASS 20230321 06:41:08 test_lldp_neighbor_remote_port PASS 20230321 06:42:21 test_kernel_module_signatures PASS 20230321 06:43:13 test_delete_heat_after_swact[OS_Cinder_Volume.yaml] PASS 20230321 06:47:49 test_multiports_on_same_network_vm_actions[virtio_x4] FAIL 20230321 07:09:46 test_cpu_pol_vm_actions[2-dedicated-image-volume] PASS 20230321 07:19:42 test_vm_mem_pool_default_config[2048] PASS 20230321 07:23:16 test_vm_mem_pool_default_config[1048576] PASS 20230321 07:30:05 test_resize_vm_positive[local_image-4_1_512-5_2_1024-image] PASS 20230321 07:38:27
test_server_group_boot_vms[affinity-2] PASS 20230321 07:44:49 test_server_group_boot_vms[anti_affinity-2] FAIL 20230321 07:51:32 test_vm_with_config_drive FAIL 20230321 07:53:48 test_lock_with_vms ----------------------------------------------------------------------- The failed test cases are due to these new launchpads: * https://bugs.launchpad.net/starlingx/+bug/2012389 - STX-Openstack: Failed to activate binding for port for live migration * https://bugs.launchpad.net/starlingx/+bug/2012392 - STX-Openstack: Failed to create volume with --image flag Thanks, STX-Openstack Distro Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Mar 21 18:31:07 2023 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 21 Mar 2023 13:31:07 -0500 Subject: [Starlingx-discuss] Fwd: PTG March 2023 Registration & Schedule In-Reply-To: References: Message-ID: Hello Everyone! The March 2023 Project Teams Gathering is right around the corner (March 27-31) and the schedule is being setup by your team leads! Slots are going fast, so make sure to get your time booked ASAP if you haven't already! You can find the schedule and available slots on the PTGbot website [1]. The PTGbot site is the during-event website to keep track of what's being discussed and any last-minute schedule changes. It is driven via commands in the #openinfra-events IRC channel (on the OFTC network) where the PTGbot listens. If you have questions about the commands that you can give the bot, check out the documentation here[2]. Also, if you haven?t connected to IRC before, here are some docs on how to get setup![3] Lastly, please don't forget to register[4] (it is free after all!). Please let us know if you have any questions via email to ptg at openinfra.dev. Thanks! -Kendall (diablo_rojo) [1] PTGbot Site: https://ptg.opendev.org/ptg.html [2] PTGbot Documentation: https://github.com/openstack/ptgbot#open-infrastructure-ptg-bot [3] Setup IRC: https://docs.openstack.org/contributors/common/irc.html [4] PTG Registration: https://openinfra-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Peng.Peng at windriver.com Wed Mar 22 13:12:47 2023 From: Peng.Peng at windriver.com (Peng, Peng) Date: Wed, 22 Mar 2023 13:12:47 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20230319T060000 Message-ID: Sanity Test from 2023 March 21 (https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230319T060000Z/outputs/iso/starlingx-intel-x86-64-cd.iso) Status: GREEN SX sanity Passed: 17 (100.0%) Failed: 0 (0.0%) Total Executed: 17 List of Test Cases: ------------------------------------------------------ PASS test_system_health_pre_session[pods] PASS test_system_health_pre_session[alarms] PASS test_system_health_pre_session[system_apps] PASS test_horizon_host_inventory_display PASS test_lock_unlock_host PASS test_pod_to_pod_connection PASS test_pod_to_service_connection PASS test_host_to_service_connection PASS test_push_docker_image_to_local_registry_active PASS test_upload_charts_via_helm_upload PASS test_host_operations_with_custom_kubectl_app PASS test_isolated_2p_2_big_pod_best_effort_HT_AIO PASS test_sriovdp_netdev_single_pod[1-1-lock/unlock] PASS test_sriovdp_netdev_connectivity_ipv4[1-1-calico-ipam] PASS test_sriovdp_mixed_add_vf_interface[1] PASS test_system_coredumps_and_crashes[core_dumps] PASS test_system_coredumps_and_crashes[crash_reports] DX sanity Passed: 23 (100.0%) Failed: 0 (0.0%) Total Executed: 23 List of Test Cases: ------------------------------------------------------ PASS test_system_health_pre_session[pods] PASS test_system_health_pre_session[alarms] PASS test_system_health_pre_session[system_apps] PASS test_horizon_host_inventory_display PASS test_lock_unlock_host PASS test_swact_controller_platform PASS test_pod_to_pod_connection PASS test_pod_to_service_connection PASS test_host_to_service_connection PASS test_push_docker_image_to_local_registry_active PASS test_push_docker_image_to_local_registry_standby PASS test_upload_charts_via_helm_upload PASS test_host_operations_with_custom_kubectl_app PASS test_force_reboot_host[active_controller-True] PASS test_force_reboot_host[active_controller-False] PASS test_force_reboot_host[standby_controller-False] PASS test_bmc_verify_bm_type_ipmi PASS test_sriovdp_netdev_single_pod[1-1-lock/unlock] PASS test_sriovdp_netdev_connectivity_ipv4[1-1-calico-ipam] PASS test_sriovdp_netdev_connectivity_ipv6[1-1-calico-ipam] PASS test_sriovdp_mixed_add_vf_interface[1] PASS test_system_coredumps_and_crashes[core_dumps] PASS test_system_coredumps_and_crashes[crash_reports] Regards, PV team -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko at openinfra.dev Wed Mar 22 15:01:19 2023 From: ildiko at openinfra.dev (Ildiko Vancsa) Date: Wed, 22 Mar 2023 08:01:19 -0700 Subject: [Starlingx-discuss] StarlingX website source on GitHub Message-ID: <84B1E803-C201-4574-990A-5771111BD9AA@openinfra.dev> Hi, The question of how to edit and update the StarlingX website came up during the TSC & Community Call today. The StarlingX website source is on GitHub: https://github.com/StarlingXWeb/starlingx-website/ If you have smaller changes or updates that you would like to add to the website, like fixing typos or updating links, etc, you can submit a Pull Request (PR). Please add me (ildikov on GitHub) to the reviewers, so I can ensure a speedy process to get the changes applied! If you have proposals for bigger changes, that would changes the website?s structure or affect the design, please reach out to me first. 
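For the smaller website fixes mentioned above, a rough sketch of the fork-and-pull-request flow; the fork URL, branch name, and commit message below are placeholders only:

    # Work from a personal fork of https://github.com/StarlingXWeb/starlingx-website/
    git clone https://github.com/<your-github-id>/starlingx-website.git
    cd starlingx-website
    git checkout -b fix-typo                    # placeholder branch name
    # ... edit the page in question, then:
    git add -A
    git commit -m "Fix typo on landing page"    # placeholder commit message
    git push origin fix-typo
    # Open a Pull Request against StarlingXWeb/starlingx-website on GitHub
    # and add ildikov as a reviewer.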
I will help you to set up next steps to discuss the change ideas with the OpenInfra Foundation?s design team, and to get them implemented on the website. Please et me know if you have any questions. Best Regards, Ildik? ??? Ildik? V?ncsa Director of Community Open Infrastructure Foundation From voipas at gmail.com Wed Mar 22 17:40:00 2023 From: voipas at gmail.com (voipas) Date: Wed, 22 Mar 2023 19:40:00 +0200 Subject: [Starlingx-discuss] failing to deploy Simplex Starlingx AIO on VirtualBox Message-ID: Hello colleagues, I need your support here. Simplex Starlingx AIO installation fails on first steps... - After installation and reboot I see that kubelet.service - Kubernetes Kubelet Server Failed. Not sure if it is normal or not at this phase... See more details below - Bootstrapping failed - Failed to provision initial system configuration. I'm trying to install Starlingx on my Intel Nuc box (i5, 64 GB RAM, 2 TB disk) with Ubuntu Desktop OS. VirtualBox version 6.1 VM configuration: - 8 vCPU (VT-X/AMD-V, Nested Paging, PAE/NX, KVM Paravirtualization) - 16 GB RAM - Storage: - Controller SATA 520 GB - Controller NVMe 20 GB - Network: - Intel Pro/1000 MT Desktop - OAM network (internet accessible) - - Intel Pro/1000 MT Desktop - Data network (internet accessible) I used latest ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/8.0.0/debian/monolithic/outputs/iso/starlingx-intel-x86-64-cd.iso So I wonder what is wrong with this deployment , am I missing something? *Kubelet failure (*/var/log/daemon.log*):* *2023-03-21T19:35:19.876 localhost systemd[1]: info Started StarlingX Affine Tasks.2023-03-21T19:35:19.968 localhost iscsid: info iSCSI daemon with pid=912 started!2023-03-21T19:35:20.057 localhost affine-tasks.sh(1211): info : Starting.2023-03-21T19:35:20.058 localhost affine-tasks.sh(1211): info : Affine all tasks, CPUS: 0-7; online=0-7 (0xff), isol=, nonisol=0-7 (0xff)2023-03-21T19:35:20.128 localhost affine-tasks.sh(1211): info : Affined 58 processes to all cores.2023-03-21T19:35:20.302 localhost systemd[1]: info kubelet.service: Scheduled restart job, restart counter is at 5.2023-03-21T19:35:20.303 localhost systemd[1]: info Stopping Kubernetes Isolated CPU Plugin Daemon...2023-03-21T19:35:20.304 localhost systemd[1]: info isolcpu_plugin.service: Succeeded.2023-03-21T19:35:20.305 localhost systemd[1]: info Stopped Kubernetes Isolated CPU Plugin Daemon.2023-03-21T19:35:20.306 localhost systemd[1]: info Stopped Kubernetes Kubelet Server.2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: Start request repeated too quickly.2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: Failed with result 'exit-code'.2023-03-21T19:35:20.306 localhost systemd[1]: err Failed to start Kubernetes Kubelet Server.2023-03-21T19:35:20.308 localhost systemd[1]: warning Dependency failed for Kubernetes Isolated CPU Plugin Daemon.2023-03-21T19:35:20.309 localhost systemd[1]: notice isolcpu_plugin.service: Job isolcpu_plugin.service/start failed with result 'dependency'.2023-03-21T19:35:20.514 localhost sysinv-agent[1012]: info /etc/init.d/sysinv-agent: line 114: [: =: unary operator expected* *Bootstrap failure:* *TASK [bootstrap/persist-config : Fail if populate config script throws an exception] *********************************************************************************************************************************************************************************Wednesday 22 March 2023 17:29:05 +0000 (0:00:00.024) 0:01:40.002 *******fatal: 
[localhost]: FAILED! => changed=false msg: Failed to provision initial system configuration.PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************localhost : ok=180 changed=45 unreachable=0 failed=1 skipped=235 rescued=0 ignored=0* *2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | TASK [bootstrap/persist-config : debug] ******************************************************************************************************************************************************************************************************************************2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | Wednesday 22 March 2023 17:29:05 +0000 (0:00:06.932) 0:01:39.978 *******2023-03-22 17:29:05,981 p=323063 u=sysadmin n=ansible | ok: [localhost] => populate_result: changed: true failed: false failed_when_result: false msg: non-zero return code rc: 1 stderr: |- Traceback (most recent call last): File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1327, in populate_service_parameter_config(client) File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1046, in populate_service_parameter_config populate_docker_kube_config(client) File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 838, in populate_docker_kube_config client.sysinv.service_parameter.delete(parameter.uuid) File "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line 45, in delete return self._delete(self._path(parameter_id)) File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", line 95, in _delete self.api.raw_request('DELETE', url) File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 224, in raw_request return self._http_request(url, method, **kwargs) File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 186, in _http_request raise exceptions.from_response( cgtsclient.exc.HTTPInternalServerError: 'int' object is not callable stderr_lines: - 'Traceback (most recent call last):' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1327, in ' - ' populate_service_parameter_config(client)' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1046, in populate_service_parameter_config' - ' populate_docker_kube_config(client)' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 838, in populate_docker_kube_config' - ' client.sysinv.service_parameter.delete(parameter.uuid)' - ' File "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line 45, in delete' - ' return self._delete(self._path(parameter_id))' - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", line 95, in _delete' - ' self.api.raw_request(''DELETE'', url)' - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 224, in raw_request' - ' return self._http_request(url, method, **kwargs)' - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 186, in _http_request' - ' raise exceptions.from_response(' - 
'cgtsclient.exc.HTTPInternalServerError: ''int'' object is not callable' stdout: |- Updating system config... System config completed. Deleting network, routes, addresses, and address pool for network mgmt... Updating management network... Deleting network, routes, addresses, and address pool for network pxeboot... Updating pxeboot network... Deleting network, routes, addresses, and address pool for network oam... Updating oam network... Deleting network, routes, addresses, and address pool for network multicast... Updating multicast network... Deleting network, routes, addresses, and address pool for network cluster-host... Updating cluster host network... Deleting network, routes, addresses, and address pool for network cluster-pod... Updating cluster pod network... Deleting network, routes, addresses, and address pool for network cluster-service... Updating cluster service network... Network config completed. Populating/Updating DNS config... DNS config completed.* Thanks in advance -- Best Regards, Giedrius -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko at openinfra.dev Wed Mar 22 21:00:04 2023 From: ildiko at openinfra.dev (Ildiko Vancsa) Date: Wed, 22 Mar 2023 14:00:04 -0700 Subject: [Starlingx-discuss] Docker is sunsetting the Free Team Organizations Message-ID: <1EF5DD94-4B60-4549-BBD5-F3DFA22D7915@openinfra.dev> Hi StarlingX Community, I?m reaching out to you with regards to Docker?s recent changes in their support services for open source projects. The main change for now is that Docker is removing their Free Team Organizations support option, that many open source projects have been using. You can read more about that here: https://web.docker.com/rs/790-SSB-375/images/privatereposfaq.pdf StarlingX signed up to Docker's Open Source program in 2021, and the StarlingX project on DockerHub is still tagged as ?Sponsored OSS?. See here: https://hub.docker.com/u/starlingx The ?Sponsored OSS? tag means that our files and images that are stored on DockerHub should not be affected at this time. However, if anyone has received any communication from Docker with regards to the StarlingX assets that are stored on DockerHub OR our open source tier subscription, please reach out to me! So I can look into potential issues and fixes to make sure we don?t experience any service disruptions. There are a couple of open source communities who are setting up alternative registries to store their images and artifacts. Please let me know if that would be in interest for StarlingX as well, and I will share some more information. Thanks and Best Regards, Ildik? ??? Ildik? V?ncsa Director of Community Open Infrastructure Foundation From Douglas.Pereira at windriver.com Thu Mar 23 13:06:26 2023 From: Douglas.Pereira at windriver.com (Pereira, Douglas) Date: Thu, 23 Mar 2023 13:06:26 +0000 Subject: [Starlingx-discuss] failing to deploy Simplex Starlingx AIO on VirtualBox In-Reply-To: References: Message-ID: Hi Giedrius, Have you tried increasing the VM memory? The documentation suggests 20480 MB for the AIO-SX configuration and you are using only 16GB. Regards, Doug From: voipas Sent: Wednesday, March 22, 2023 2:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] failing to deploy Simplex Starlingx AIO on VirtualBox CAUTION: This email comes from a non Wind River email account! Do not click links or open attachments unless you recognize the sender and know the content is safe. Hello colleagues, I need your support here. 
Simplex Starlingx AIO installation fails on first steps... * After installation and reboot I see that kubelet.service - Kubernetes Kubelet Server Failed. Not sure if it is normal or not at this phase... See more details below * Bootstrapping failed - Failed to provision initial system configuration. I'm trying to install Starlingx on my Intel Nuc box (i5, 64 GB RAM, 2 TB disk) with Ubuntu Desktop OS. VirtualBox version 6.1 VM configuration: * 8 vCPU (VT-X/AMD-V, Nested Paging, PAE/NX, KVM Paravirtualization) * 16 GB RAM * Storage: * Controller SATA 520 GB * Controller NVMe 20 GB * Network: * Intel Pro/1000 MT Desktop - OAM network (internet accessible) * * Intel Pro/1000 MT Desktop - Data network (internet accessible) I used latest ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/8.0.0/debian/monolithic/outputs/iso/starlingx-intel-x86-64-cd.iso So I wonder what is wrong with this deployment , am I missing something? Kubelet failure (/var/log/daemon.log): 2023-03-21T19:35:19.876 localhost systemd[1]: info Started StarlingX Affine Tasks. 2023-03-21T19:35:19.968 localhost iscsid: info iSCSI daemon with pid=912 started! 2023-03-21T19:35:20.057 localhost affine-tasks.sh(1211): info : Starting. 2023-03-21T19:35:20.058 localhost affine-tasks.sh(1211): info : Affine all tasks, CPUS: 0-7; online=0-7 (0xff), isol=, nonisol=0-7 (0xff) 2023-03-21T19:35:20.128 localhost affine-tasks.sh(1211): info : Affined 58 processes to all cores. 2023-03-21T19:35:20.302 localhost systemd[1]: info kubelet.service: Scheduled restart job, restart counter is at 5. 2023-03-21T19:35:20.303 localhost systemd[1]: info Stopping Kubernetes Isolated CPU Plugin Daemon... 2023-03-21T19:35:20.304 localhost systemd[1]: info isolcpu_plugin.service: Succeeded. 2023-03-21T19:35:20.305 localhost systemd[1]: info Stopped Kubernetes Isolated CPU Plugin Daemon. 2023-03-21T19:35:20.306 localhost systemd[1]: info Stopped Kubernetes Kubelet Server. 2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: Start request repeated too quickly. 2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: Failed with result 'exit-code'. 2023-03-21T19:35:20.306 localhost systemd[1]: err Failed to start Kubernetes Kubelet Server. 2023-03-21T19:35:20.308 localhost systemd[1]: warning Dependency failed for Kubernetes Isolated CPU Plugin Daemon. 2023-03-21T19:35:20.309 localhost systemd[1]: notice isolcpu_plugin.service: Job isolcpu_plugin.service/start failed with result 'dependency'. 2023-03-21T19:35:20.514 localhost sysinv-agent[1012]: info /etc/init.d/sysinv-agent: line 114: [: =: unary operator expected Bootstrap failure: TASK [bootstrap/persist-config : Fail if populate config script throws an exception] ********************************************************************************************************************************************************************************* Wednesday 22 March 2023 17:29:05 +0000 (0:00:00.024) 0:01:40.002 ******* fatal: [localhost]: FAILED! => changed=false msg: Failed to provision initial system configuration. 
PLAY RECAP *********************************************************************************************************************************************************************************************************************************************************** localhost : ok=180 changed=45 unreachable=0 failed=1 skipped=235 rescued=0 ignored=0 2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | TASK [bootstrap/persist-config : debug] ********************************************************************************************************************************************************************** ******************************************************** 2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | Wednesday 22 March 2023 17:29:05 +0000 (0:00:06.932) 0:01:39.978 ******* 2023-03-22 17:29:05,981 p=323063 u=sysadmin n=ansible | ok: [localhost] => populate_result: changed: true failed: false failed_when_result: false msg: non-zero return code rc: 1 stderr: |- Traceback (most recent call last): File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1327, in populate_service_parameter_config(client) File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1046, in populate_service_parameter_config populate_docker_kube_config(client) File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 838, in populate_docker_kube_config client.sysinv.service_parameter.delete(parameter.uuid) File "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line 45, in delete return self._delete(self._path(parameter_id)) File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", line 95, in _delete self.api.raw_request('DELETE', url) File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 224, in raw_request return self._http_request(url, method, **kwargs) File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 186, in _http_request raise exceptions.from_response( cgtsclient.exc.HTTPInternalServerError: 'int' object is not callable stderr_lines: - 'Traceback (most recent call last):' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1327, in ' - ' populate_service_parameter_config(client)' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 1046, in populate_service_parameter_config' - ' populate_docker_kube_config(client)' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", line 838, in populate_docker_kube_config' - ' client.sysinv.service_parameter.delete(parameter.uuid)' - ' File "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line 45, in delete' - ' return self._delete(self._path(parameter_id))' - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", line 95, in _delete' - ' self.api.raw_request(''DELETE'', url)' - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 224, in raw_request' - ' return self._http_request(url, method, **kwargs)' - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 186, in _http_request' - ' raise exceptions.from_response(' - 'cgtsclient.exc.HTTPInternalServerError: ''int'' object is not callable' stdout: |- Updating system config... 
System config completed. Deleting network, routes, addresses, and address pool for network mgmt... Updating management network... Deleting network, routes, addresses, and address pool for network pxeboot... Updating pxeboot network... Deleting network, routes, addresses, and address pool for network oam... Updating oam network... Deleting network, routes, addresses, and address pool for network multicast... Updating multicast network... Deleting network, routes, addresses, and address pool for network cluster-host... Updating cluster host network... Deleting network, routes, addresses, and address pool for network cluster-pod... Updating cluster pod network... Deleting network, routes, addresses, and address pool for network cluster-service... Updating cluster service network... Network config completed. Populating/Updating DNS config... DNS config completed. Thanks in advance -- Best Regards, Giedrius -------------- next part -------------- An HTML attachment was scrubbed... URL: From voipas at gmail.com Thu Mar 23 14:25:41 2023 From: voipas at gmail.com (voipas) Date: Thu, 23 Mar 2023 16:25:41 +0200 Subject: [Starlingx-discuss] failing to deploy Simplex Starlingx AIO on VirtualBox In-Reply-To: References: Message-ID: Hey Douglas, Thanks for your response. I recreated a new VM with 24 GB RAM - again after installation, Kubelet is still not launching... Also, attaching disk layout, just in case we have sufficient space NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 520G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 29.3G 0 part /var/rootdirs/opt/platform-backup |-sda3 8:3 0 300M 0 part /boot/efi |-sda4 8:4 0 2G 0 part /boot `-sda5 8:5 0 488.4G 0 part |-cgts--vg-root--lv 253:0 0 20G 0 lvm /sysroot |-cgts--vg-var--lv 253:1 0 20G 0 lvm /var |-cgts--vg-log--lv 253:2 0 7.8G 0 lvm /var/log `-cgts--vg-scratch--lv 253:3 0 15.6G 0 lvm /var/rootdirs/scratch sr0 11:0 1 1024M 0 rom nvme0n1 259:0 0 50G 0 disk I see these kind of errors in daemon log: 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: notice /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy directory /var/run/, updating /var/run/kubernetes <86><92> /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly. 2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info Inserted module 'ib_cm' 2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info Inserted module 'ib_ucm' 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. 
Ignoring 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or directory. Ignoring 2023-03-23T13:58:51.021 localhost systemd[1]: info Finished Create Static Device Nodes in /dev. 2023-03-23T13:58:51.021 localhost systemd[1]: info Starting Rule-based Manager for Device Events and Files... 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted module 'ib_uverbs' 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted module 'iw_cm' 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted module 'rdma_cm' 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted module 'rdma_ucm' 2023-03-23T13:58:51.021 localhost systemd-udevd[459]: err /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:8 Unknown user 'ceph', ignoring 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:8 Unknown group 'ceph', ignoring 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:13 Unknown user 'ceph', ignoring 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:13 Unknown group 'ceph', ignoring 2023-03-23T13:58:51.022 localhost systemd[1]: info Started Rule-based Manager for Device Events and Files. 2023-03-23T13:58:51.022 localhost systemd[1]: info Starting Apply Kernel Variables... 2023-03-23T13:58:51.022 localhost systemd-sysctl[482]: info Couldn't write '20' to 'fs/negative-dentry-limit', ignoring: No such file or directory 2023-03-23T13:58:51.022 localhost systemd[1]: info Finished Apply Kernel Variables. 2023-03-23T13:58:51.022 localhost systemd-udevd[474]: info Using interface naming scheme 'vSTX7_0'. 2023-03-23T13:58:51.022 localhost systemd-udevd[474]: info ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. 2023-03-23T13:58:51.022 localhost systemd-udevd[463]: info ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable. 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: notice /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy directory /var/run/, updating /var/run/kubernetes <86><92> /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly. 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or directory. 
Ignoring 2023-03-23T13:58:51.024 localhost systemd[1]: info Finished Create Volatile Files and Directories. 2023-03-23T13:58:51.025 localhost systemd[1]: info Started Kubernetes Kubelet Server. 2023-03-23T13:58:51.025 localhost polkitd[801]: info started daemon version 0.105 using authority implementation `local' version `0.105' 2023-03-23T13:58:51.025 localhost systemd[863]: info kubelet.service: Failed to locate executable /usr/bin/kubelet: No such file or directory 2023-03-23T13:58:51.025 localhost systemd[863]: err kubelet.service: Failed at step EXEC spawning /usr/bin/kubelet: No such file or directory 2023-03-23T13:58:51.025 localhost systemd[1]: info Starting Kubernetes Isolated CPU Plugin Daemon... 2023-03-23T13:59:10.551 localhost controller_config[1459]: info Pausing for 5 seconds... 2023-03-23T13:59:14.605 localhost lldpd[998]: info removal request for address of fe80::a00:27ff:fe85:8445%2, but no knowledge of it 2023-03-23T13:59:14.842 localhost lldpd[998]: info removal request for address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it 2023-03-23T13:59:15.565 localhost systemd[1]: notice controllerconfig.service: Main process exited, code=exited, status=1/FAILURE 2023-03-23T13:59:15.565 localhost systemd[1]: warning controllerconfig.service: Failed with result 'exit-code'. 2023-03-23T13:59:15.579 localhost systemd[1]: info Finished General StarlingX config gate. 2023-03-23T13:59:15.581 localhost systemd[1]: info Starting StarlingX Maintenance Filesystem Monitor... 2023-03-23T13:59:15.583 localhost systemd[1]: info Started Getty on tty1. 2023-03-23T13:59:15.584 localhost systemd[1]: info Reached target Login Prompts. 2023-03-23T13:59:15.586 localhost systemd[1]: info Starting StarlingX Maintenance Worker Goenable Ready... 2023-03-23T13:59:15.587 localhost systemd[1]: info Starting StarlingX Maintenance Goenable Ready... 2023-03-23T13:59:15.588 localhost systemd[1]: info Starting StarlingX Maintenance Heartbeat Client... 2023-03-23T13:59:15.589 localhost systemd[1]: info Starting Starling-X Maintenance Link Monitor... 2023-03-23T13:59:15.590 localhost systemd[1]: info Starting StarlingX Maintenance Alarm Handler Client... 2023-03-23T13:59:15.591 localhost systemd[1]: info Starting StarlingX Maintenance Logger... 2023-03-23T13:59:15.593 localhost systemd[1]: info Starting StarlingX Pxeboot Feed Refresh... 2023-03-23T13:59:15.594 localhost systemd[1]: info Starting Service Management Unit... 2023-03-23T13:59:15.597 localhost systemd[1]: info Finished StarlingX Maintenance Worker Goenable Ready. 2023-03-23T13:59:15.610 localhost goenabled[1504]: info Goenabled Ready: [ OK ] 2023-03-23T13:59:15.610 localhost systemd[1]: info Finished StarlingX Maintenance Goenable Ready. 2023-03-23T13:59:15.630 localhost lmon[1507]: info Starting lmond: OK 2023-03-23T13:59:15.630 localhost systemd[1]: info lmon.service: Can't open PID file /run/lmond.pid (yet?) after start: Operation not permitted 2023-03-23T13:59:15.633 localhost mtclog[1509]: info Starting mtclogd: OK 2023-03-23T13:59:15.634 localhost hbsClient[1506]: info Starting hbsClient: OK 2023-03-23T13:59:15.635 localhost fsmon[1501]: info Starting fsmond: OK 2023-03-23T13:59:15.636 localhost systemd[1]: info mtclog.service: Can't open PID file /run/mtclogd.pid (yet?) after start: Operation not permitted 2023-03-23T13:59:15.636 localhost systemd[1]: info hbsClient.service: Can't open PID file /run/hbsClient.pid (yet?) 
after start: Operation not permitted 2023-03-23T13:59:15.637 localhost systemd[1]: info fsmon.service: Can't open PID file /run/fsmond.pid (yet?) after start: Operation not permitted 2023-03-23T13:59:15.637 localhost mtcalarm[1508]: info Starting mtcalarmd: OK 2023-03-23T13:59:15.639 localhost systemd[1]: info mtcalarm.service: Can't open PID file /run/mtcalarmd.pid (yet?) after start: Operation not permitted 2023-03-23T14:03:53.832 localhost affine-tasks.sh(1218): info : Recovery wait, elapsed 301 seconds. Reason: k8s-infra not configured 2023-03-23T14:08:00.073 localhost avahi-daemon[796]: info Joining mDNS multicast group on interface enp0s3.IPv4 with address 10.0.1.3. 2023-03-23T14:08:00.074 localhost avahi-daemon[796]: info New relevant interface enp0s3.IPv4 for mDNS. 2023-03-23T14:08:00.074 localhost avahi-daemon[796]: info Registering new address record for 10.0.1.3 on enp0s3.IPv4. 2023-03-23T14:08:01.560 localhost avahi-daemon[796]: info Joining mDNS multicast group on interface enp0s3.IPv6 with address fe80::a00:27ff:fe85:8445. 2023-03-23T14:08:01.561 localhost avahi-daemon[796]: info New relevant interface enp0s3.IPv6 for mDNS. 2023-03-23T14:08:01.561 localhost avahi-daemon[796]: info Registering new address record for fe80::a00:27ff:fe85:8445 on enp0s3.*. 2023-03-23T14:08:54.436 localhost affine-tasks.sh(1218): info : Recovery wait, elapsed 602 seconds. Reason: k8s-infra not configured 2023-03-23T14:11:06.065 localhost lldpd[998]: info removal request for address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it 2023-03-23T14:13:37.869 localhost systemd[1]: info Starting Cleanup of Temporary Directories... 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: notice /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy directory /var/run/, updating /var/run/kubernetes <86><92> /run/kubernetes; please update the tmpfiles.d/ drop-in file accor dingly. 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or directory. Ignoring 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or directory. Ignoring 2023-03-23T14:13:37.912 localhost systemd[1]: info systemd-tmpfiles-clean.service: Succeeded. 2023-03-23T14:13:37.912 localhost systemd[1]: info Finished Cleanup of Temporary Directories. 2023-03-23T14:13:55.158 localhost affine-tasks.sh(1218): info : Recovery wait, elapsed 903 seconds. Reason: k8s-infra not configured 2023-03-23T14:18:55.627 localhost affine-tasks.sh(1218): info : Recovery wait, elapsed 1203 seconds. 
Reason: k8s-infra not configured 2023-03-23T14:22:57.658 localhost lldpd[998]: info removal request for address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it On Thu, Mar 23, 2023 at 3:06?PM Pereira, Douglas < Douglas.Pereira at windriver.com> wrote: > Hi Giedrius, > > > > Have you tried increasing the VM memory? The documentation > > suggests 20480 MB for the AIO-SX configuration and you are using only 16GB. > > > > Regards, > > Doug > > > > *From:* voipas > *Sent:* Wednesday, March 22, 2023 2:40 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] failing to deploy Simplex Starlingx AIO on > VirtualBox > > > > *CAUTION: This email comes from a non Wind River email account!* > Do not click links or open attachments unless you recognize the sender and > know the content is safe. > > Hello colleagues, > > > > I need your support here. Simplex Starlingx AIO installation fails on > first steps... > > - After installation and reboot I see that kubelet.service - > Kubernetes Kubelet Server Failed. Not sure if it is normal or not at this > phase... See more details below > - Bootstrapping failed - Failed to provision initial system > configuration. > > > > I'm trying to install Starlingx on my Intel Nuc box (i5, 64 GB RAM, 2 TB > disk) with Ubuntu Desktop OS. VirtualBox version 6.1 > > > > VM configuration: > > - 8 vCPU (VT-X/AMD-V, Nested Paging, PAE/NX, KVM Paravirtualization) > - 16 GB RAM > - Storage: > > > - Controller SATA 520 GB > - Controller NVMe 20 GB > > > - Network: > > > - Intel Pro/1000 MT Desktop - OAM network (internet accessible) > - > - Intel Pro/1000 MT Desktop - Data network (internet accessible) > > > > I used latest ISO image: > http://mirror.starlingx.cengn.ca/mirror/starlingx/release/8.0.0/debian/monolithic/outputs/iso/starlingx-intel-x86-64-cd.iso > > > > > So I wonder what is wrong with this deployment , am I missing something? > > > > > > *Kubelet failure (*/var/log/daemon.log*):* > > > > > > > > > > > > > > > > > *2023-03-21T19:35:19.876 localhost systemd[1]: info Started StarlingX > Affine Tasks. 2023-03-21T19:35:19.968 localhost iscsid: info iSCSI daemon > with pid=912 started! 2023-03-21T19:35:20.057 localhost > affine-tasks.sh(1211): info : Starting. 2023-03-21T19:35:20.058 localhost > affine-tasks.sh(1211): info : Affine all tasks, CPUS: 0-7; online=0-7 > (0xff), isol=, nonisol=0-7 (0xff) 2023-03-21T19:35:20.128 localhost > affine-tasks.sh(1211): info : Affined 58 processes to all cores. > 2023-03-21T19:35:20.302 localhost systemd[1]: info kubelet.service: > Scheduled restart job, restart counter is at 5. 2023-03-21T19:35:20.303 > localhost systemd[1]: info Stopping Kubernetes Isolated CPU Plugin > Daemon... 2023-03-21T19:35:20.304 localhost systemd[1]: info > isolcpu_plugin.service: Succeeded. 2023-03-21T19:35:20.305 localhost > systemd[1]: info Stopped Kubernetes Isolated CPU Plugin Daemon. > 2023-03-21T19:35:20.306 localhost systemd[1]: info Stopped Kubernetes > Kubelet Server. 2023-03-21T19:35:20.306 localhost systemd[1]: warning > kubelet.service: Start request repeated too quickly. > 2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: > Failed with result 'exit-code'. 2023-03-21T19:35:20.306 localhost > systemd[1]: err Failed to start Kubernetes Kubelet Server. > 2023-03-21T19:35:20.308 localhost systemd[1]: warning Dependency failed for > Kubernetes Isolated CPU Plugin Daemon. 
2023-03-21T19:35:20.309 localhost > systemd[1]: notice isolcpu_plugin.service: Job isolcpu_plugin.service/start > failed with result 'dependency'. 2023-03-21T19:35:20.514 localhost > sysinv-agent[1012]: info /etc/init.d/sysinv-agent: line 114: [: =: unary > operator expected* > > > > > > *Bootstrap failure:* > > > > > > > > *TASK [bootstrap/persist-config : Fail if populate config script throws an > exception] > ********************************************************************************************************************************************************************************* > Wednesday 22 March 2023 17:29:05 +0000 (0:00:00.024) 0:01:40.002 > ******* fatal: [localhost]: FAILED! => changed=false msg: Failed to > provision initial system configuration. PLAY RECAP > *********************************************************************************************************************************************************************************************************************************************************** > localhost : ok=180 changed=45 unreachable=0 failed=1 > skipped=235 rescued=0 ignored=0* > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > *2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | TASK > [bootstrap/persist-config : debug] > ********************************************************************************************************************************************************************** > ******************************************************** 2023-03-22 > 17:29:05,960 p=323063 u=sysadmin n=ansible | Wednesday 22 March 2023 > 17:29:05 +0000 (0:00:06.932) 0:01:39.978 ******* 2023-03-22 > 17:29:05,981 p=323063 u=sysadmin n=ansible | ok: [localhost] => > populate_result: changed: true failed: false > failed_when_result: false msg: non-zero return code rc: 1 > stderr: |- Traceback (most recent call last): File > "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", > line 1327, in populate_service_parameter_config(client) > File > "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", > line 1046, in populate_service_parameter_config > populate_docker_kube_config(client) File > "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", > line 838, in populate_docker_kube_config > client.sysinv.service_parameter.delete(parameter.uuid) File > "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line > 45, in delete return self._delete(self._path(parameter_id)) > File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", line 95, > in _delete self.api.raw_request('DELETE', url) File > "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 224, in > raw_request return self._http_request(url, method, **kwargs) > File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line > 186, in _http_request raise exceptions.from_response( > cgtsclient.exc.HTTPInternalServerError: 'int' object is not callable > stderr_lines: - 'Traceback (most recent call last):' - ' File > "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", > line 1327, in ' - ' > populate_service_parameter_config(client)' - ' File > "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", > line 1046, in 
populate_service_parameter_config' - ' > populate_docker_kube_config(client)' - ' File > "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", > line 838, in populate_docker_kube_config' - ' > client.sysinv.service_parameter.delete(parameter.uuid)' - ' File > "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line > 45, in delete' - ' return self._delete(self._path(parameter_id))' > - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", > line 95, in _delete' - ' self.api.raw_request(''DELETE'', url)' > - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line > 224, in raw_request' - ' return self._http_request(url, method, > **kwargs)' - ' File > "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 186, in > _http_request' - ' raise exceptions.from_response(' - > 'cgtsclient.exc.HTTPInternalServerError: ''int'' object is not callable' > stdout: |- Updating system config... System config completed. > Deleting network, routes, addresses, and address pool for network > mgmt... Updating management network... Deleting network, > routes, addresses, and address pool for network pxeboot... Updating > pxeboot network... Deleting network, routes, addresses, and address > pool for network oam... Updating oam network... Deleting > network, routes, addresses, and address pool for network multicast... > Updating multicast network... Deleting network, routes, addresses, > and address pool for network cluster-host... Updating cluster host > network... Deleting network, routes, addresses, and address pool for > network cluster-pod... Updating cluster pod network... Deleting > network, routes, addresses, and address pool for network cluster-service... > Updating cluster service network... Network config completed. > Populating/Updating DNS config... DNS config completed.* > > > > Thanks in advance > > > > -- > > Best Regards, > Giedrius > -- Best Regards, Giedrius -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Mar 23 16:37:03 2023 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 23 Mar 2023 16:37:03 +0000 Subject: [Starlingx-discuss] Minutes: Community Call (Mar 22, 2023) Message-ID: Etherpad: https://etherpad.opendev.org/p/stx-status Minutes from the community call March 22, 2023 Standing topics - Build - Main Branch Debian Builds - Several build failures due to env / network outages on CENGN - Last 3 builds were successful and another one is currently in progress - Note: From Ildiko, dockerhub is changing some of their offerings, so want to confirm that the OSS designation is still applied to StarlingX - Ildiko will send a note to the mailing list to confirm w/ our dockerhub admin (Scott Little?) 
- stx.6.0 Weekly Builds - Green - Build Output: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/6.0/ - stx.7.0 Weekly Builds - Green - Build Output: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/7.0/ - stx.8.0 Weekly Builds - Green - Build Output: https://mirror.starlingx.cengn.ca/mirror/starlingx/rc/8.0/ - Sanity - Debian Main Branch Platform Sanity - Last sanity email sent on Mar 22: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-March/013909.html - Status: Green for both SX & DX - Debian Main Branch stx-openstack Sanity - Last sanity email sent on Mar 21: https://lists.starlingx.io/pipermail/starlingx-discuss/2023-March/013907.html - Status: Red - LP: https://bugs.launchpad.net/starlingx/+bug/2007303 -- intermittent issue / likely to work on retest, but still needs further investigation - LP: https://bugs.launchpad.net/starlingx/+bug/2012389 -- new due to running additional tests; didn't investigate yet - LP: https://bugs.launchpad.net/starlingx/+bug/2012392 -- new due to running additional tests; didn't investigate yet - Team has some resourcing challenges, so not sure if there will be bandwidth to investigate before the next sanity run - Gerrit Reviews in Need of Attention - Nothing new brought up in the meeting - From previous meetings: Fixes to libvirt env: https://review.opendev.org/c/starlingx/tools/+/863735 - Review comments provided in Nov; still waiting for author (Scott Kamp) to respond/address review comments - Jan 18: Some activity in the review as of Jan 15 - Feb 1: Alternative fix proposed on Jan 18; waiting for ScottK's review - Feb 15: ScottK is going back to this - Mar 22: Next action is w/ ScottK; As per today's meeting, he'll be looking at the comments - Reference Links: - Active Branch (open): https://review.opendev.org/q/projects:starlingx+is:open+branch:+master - Active Branch (merged): https://review.opendev.org/q/projects:starlingx+is:merged+branch:master Topics for this week - Virtual PTG Next Week - March 28 - 29: https://etherpad.opendev.org/p/stx-ptg-planning-march-2023 - The release and community meetings will be cancelled due to the PTG - Action: Ghada to send emails to the mailing list w/ the cancellations - Release Status - stx.9.0 - Release Tracking Spreadsheet created: https://docs.google.com/spreadsheets/d/1aTjYzUkExodfayt-rjTv466jE-DP8b_YjrTHhXW6G9w/edit?usp=sharing - New features starting to be added - Call to community members/PLs to continue adding their feature proposals - Would like to have the majority of features proposed by the March virtual PTG (March 28-29) - New team contribution: - From Douglas Pereira: new team is starting to review the stx docs with the goal to come up with a quick installation guide for new developers to get an env setup quickly - Also looking at tools like libvirt/qemu for the virtual env. - Greg mentioned that the virtual install guides are not up-to-date. The plan was to get rid of the virtual install guides and fold them into the baremetal install guides as the difference is very small - Agreed that Douglas's team will engage with the stx doc subproject team to sync up and discuss next steps. - starlingx.io webpage - From Bruce Jones: content hasn't been significantly updated in the last few days. 
- As per Ildiko, the source is on github and community members can submit a PR to update the content - Action: Ildiko to send the link to the mailing list ARs from Previous Meetings - None Open Requests for Help - stx.8.0 Interface Bonding issue - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-March/013877.html - Action: Ghada asked Steve Webster (networking PL) to respond, but no response yet. Ghada to send a reminder. - stx-openstack apply failed after enabling cpu_dedicated_setInbox - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-January/013720.html - https://lists.starlingx.io/pipermail/starlingx-discuss/2023-January/013724.html - Seen w/ stx.7.0. Issue only happens when setting cpu dedicated CPUs on a running system. During the unlock, the openstack fails to re-apply. - LP: https://bugs.launchpad.net/starlingx/+bug/2002157 - Assigned to Thales; team still has no bandwidth to look at this. - Status: Open / No updates ... still waiting for Thales/team to have time to look at this. From Ghada.Khalil at windriver.com Thu Mar 23 16:41:37 2023 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 23 Mar 2023 16:41:37 +0000 Subject: [Starlingx-discuss] Release & Community Calls cancelled next week due to PTG Message-ID: Hello all, The following calls are cancelled next week due to the PTG: - Bi-weekly stx release call scheduled on Wednesday March 29 at 9:30am Eastern - Weekly stx community call scheduled on Wednesday March 29 at 10:00am Eastern Please join us for the PTG instead. Etherpad with the schedule/agenda: https://etherpad.opendev.org/p/stx-ptg-planning-march-2023 Regards, Ghada From Juanita.Balaraj at windriver.com Thu Mar 23 17:41:49 2023 From: Juanita.Balaraj at windriver.com (Balaraj, Juanita) Date: Thu, 23 Mar 2023 17:41:49 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 22-03-23 In-Reply-To: References: Message-ID: Hello All, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation Thanks, Juanita Balaraj ============ 22-Mar-23 Stx 9.0 Release Tracking Spreadsheet: https://docs.google.com/spreadsheets/d/1aTjYzUkExodfayt-rjTv466jE-DP8b_YjrTHhXW6G9w/edit?usp=sharing Gerrit Reviews: - Open Reviews pending; https://review.opendev.org/q/starlingx/docs+status:open - To be reviewed and merged. Launchpad Doc Bugs: - https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs Total 8 bugs outstanding - The DOC team to create LP defects corresponding with any DS defects - WIP. Teams to add all the impacted Releases when raising an LP bug Miscellaneous Updates: - Numbering in the Distributed Cloud Guide is causing issues - AR Ron to check - WIP. General Updates: - Sphinx Tools - Need to have further discussions about the version used upstream vs. downstream - AR Ron -To determine version. 
- Docker Container upstream / downstream - AR Ron (Discuss it with Greg) - Operations Guide Archive - On Hold until further clarifications are discussed with Greg (https://review.opendev.org/c/starlingx/docs/+/822030) - Virtual PTG March 27-31, 2023 - Doc Team presentation is on March 28th Etherpad for Virtual PTG Planning: https://etherpad.opendev.org/p/stx-ptg-planning-march-2023 - OpenInfra Summit & PTG: June 13-15, Vancouver, Canada - https://openinfra.dev/summit/vancouver-2023/ - Workshop Details are tracked in https://etherpad.opendev.org/p/stx_hands_on_workshop_2023 ____________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko at openinfra.dev Thu Mar 23 22:34:07 2023 From: ildiko at openinfra.dev (Ildiko Vancsa) Date: Thu, 23 Mar 2023 15:34:07 -0700 Subject: [Starlingx-discuss] Mission statement discussion - FEEDBACK NEEDED In-Reply-To: <76D899BE-EBE1-488E-8A54-12E362834663@openinfra.dev> References: <76D899BE-EBE1-488E-8A54-12E362834663@openinfra.dev> Message-ID: Hi Starlingx Community, I?m circling back on this thread to collect feedback about the new proposed mission statement for StarlingX: "Solve the operational problem of deploying and managing high-performance, distributed cloud infrastructure at scale.? Please respond to this thread if you have questions or feedback about the above proposal. We can also discuss it at the PTG next week. Best Regards, Ildik? ??? Ildik? V?ncsa Director of Community Open Infrastructure Foundation > On Dec 6, 2022, at 20:41, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > I?m reaching out to you with an update from today?s mission statement workshop that we had during the marketing team call today. > > The group who participated in today?s discussion, came up with the following proposal for a revised mission statement: > > "Solve the operational problem of deploying and managing high-performance, distributed cloud infrastructure at scale." > > > We decided to keep close to the current statement since the community?s focus is still on making it easier to deploy and manage distributed infrastructure on a large scale, which we felt important to highlight. > > For further notes about the discussions please see the following etherpad: https://etherpad.opendev.org/p/r.b99c5952acde86556a7164d870c08971 > > The new mission statement is not final yet, we would like to have feedback from the community before putting a new one in place! Please share your feedback about the above proposal on this thread __by the end of next Friday (December 16)__! > > > Please let me know if you have any questions. > > Thanks and Best Regards, > Ildik? > > ??? > > Ildik? V?ncsa > Senior Manager, Community & Ecosystem > Open Infrastructure Foundation > > > > >> On Nov 30, 2022, at 09:39, Ildiko Vancsa wrote: >> >> Hi StarlingX Community, >> >> I?m reaching out to you with a short update about the mission statement discussion. >> >> Unfortunately we didn?t hit critical mass on the call that was scheduled for yesterday due to some last minute conflicts that folks had. We decided to use the regular Marketing team meeting as a follow-up workshop to continue the discussion about refreshing the project?s mission statement. >> >> I would like to invite and encourage everyone who would like to participate to join. >> >> The call is scheduled for December 06 at noon PST / 2000 UTC. 
>> >> The Zoom bridge to dial in is: https://zoom.us/j/270117095?pwd=SFBReFFsdUdQd3FMdnMxVytiSzhUZz09 >> >> Please find the notes from our previous call here: https://etherpad.opendev.org/p/starlingx-mission-statement >> >> Please let me know if you would like me to add you to the meeting invite. >> >> Thanks and Best Regards, >> Ildik? >> >> ??? >> >> Ildik? V?ncsa >> Senior Manager, Community & Ecosystem >> Open Infrastructure Foundation >> >> >> >> >>> On Nov 16, 2022, at 20:09, Ildiko Vancsa wrote: >>> >>> Hi StarlingX Community, >>> >>> I?m reaching out to let you know that marketing team is having a follow-up workshop to continue the discussion about refreshing the project?s mission statement, and I would like to invite and encourage everyone who would like to participate to join. >>> >>> The call is scheduled to November 29 at noon PST / 2000 UTC. >>> >>> The Zoom bridge to dial in is: https://us06web.zoom.us/j/85344179503?pwd=K05kckptcmM2N3dhTEJheFVyS05OQT09 >>> >>> Please find the notes from our previous call here: https://etherpad.opendev.org/p/starlingx-mission-statement >>> >>> Please let me know if you would like me to add you to the meeting invite. >>> >>> Thanks and Best Regards, >>> Ildik? >>> >>> ??? >>> >>> Ildik? V?ncsa >>> Senior Manager, Community & Ecosystem >>> Open Infrastructure Foundation >>> >>> >>> >>> >>>> On Nov 9, 2022, at 07:32, Ildiko Vancsa wrote: >>>> >>>> Hi StarlingX Community, >>>> >>>> As a follow up to the PTG the StarlingX Marketing team started to work on proposals to update the project?s mission statement. >>>> >>>> The team would like to ask the community for input so we can have a more guided activity and better fitting results. The questions we raised during the Marketing meeting yesterday: >>>> * What will be the community?s priorities for the short to mid-term future, meaning the next 1-3 years? >>>> * What is the community?s vision with regards to challenges that the StarlingX platform is developed and shaped to solve and address? >>>> >>>> To ensure progress the Marketing team set up an ad-hoc meeting time for Monday (November 14) at 11am US Pacific Time, to have a workshop and work on options. Anyone who is interested in participating in that conversation please join us on the call! >>>> The dial-in link for the meeting is: https://windriver.zoom.us/j/97692404047?pwd=Um93SXBzNFMwL21JUmJYU3dteVNVdz09 >>>> >>>> Thanks and Best Regards, >>>> Ildik? >>>> >>>> ??? >>>> >>>> Ildik? V?ncsa >>>> Senior Manager, Community & Ecosystem >>>> Open Infrastructure Foundation >>>> >>>> >>>> >>>> >>> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io From ildiko at openinfra.dev Fri Mar 24 19:04:09 2023 From: ildiko at openinfra.dev (Ildiko Vancsa) Date: Fri, 24 Mar 2023 12:04:09 -0700 Subject: [Starlingx-discuss] Mission statement discussion - FEEDBACK NEEDED In-Reply-To: References: <76D899BE-EBE1-488E-8A54-12E362834663@openinfra.dev> Message-ID: <02A37C7B-2CCF-4212-A10C-6F4C783DEF97@openinfra.dev> Hi, I just saw that the mission statement topic is already added to the PTG agenda, currently for Wednesday, so we can have a conversation about the current proposal and next steps at the event. 
If anyone has comments about the current proposal, please feel free to also share that here prior to the event next week, to help bring the conversation forward. Thanks and Best Regards, Ildikó --- Ildikó Váncsa Director of Community Open Infrastructure Foundation > On Mar 23, 2023, at 15:34, Ildiko Vancsa wrote: > > Hi Starlingx Community, > > I'm circling back on this thread to collect feedback about the new proposed mission statement for StarlingX: > > "Solve the operational problem of deploying and managing high-performance, distributed cloud infrastructure at scale." > > Please respond to this thread if you have questions or feedback about the above proposal. We can also discuss it at the PTG next week. > > Best Regards, > Ildikó > > [...]
From douglas.pereira at encora.com Fri Mar 24 20:28:38 2023 From: douglas.pereira at encora.com (Douglas Lopes Pereira) Date: Fri, 24 Mar 2023 20:28:38 +0000 Subject: [Starlingx-discuss] Mission statement discussion - FEEDBACK NEEDED In-Reply-To: <02A37C7B-2CCF-4212-A10C-6F4C783DEF97@openinfra.dev> References: <76D899BE-EBE1-488E-8A54-12E362834663@openinfra.dev> <02A37C7B-2CCF-4212-A10C-6F4C783DEF97@openinfra.dev> Message-ID: Hi Ildikó, Unfortunately, I won't be able to join the PTG discussions next week. However, I wanted to share my thoughts regarding the proposed mission statement for the StarlingX project. While I appreciate the current mission statement, I believe we should emphasize the project's versatility and its ability to address a wide range of solutions. 
Additionally, I think it's important to highlight that StarlingX is not just solving a problem, but providing a practical solution for real-world use cases. With that in mind, I suggest a revised mission statement: "Empower organizations to deploy and manage high-performance, distributed cloud infrastructure at scale by providing a powerful and flexible solution." Thank you for considering my input. Please let me know if there is anything else I can contribute remotely to the PTG discussions. Best regards, Doug Privileged and confidential. If this message has been received in error, please notify the sender and delete it immediately. Conteúdo confidencial. Se esta mensagem foi recebida por engano, favor avisar o remetente e apagá-la imediatamente. -----Original Message----- From: Ildiko Vancsa Sent: Friday, March 24, 2023 4:04 PM To: StarlingX ML Subject: Re: [Starlingx-discuss] Mission statement discussion - FEEDBACK NEEDED Hi, I just saw that the mission statement topic is already added to the PTG agenda, currently for Wednesday, so we can have a conversation about the current proposal and next steps at the event. If anyone has comments about the current proposal, please feel free to also share that here prior to the event next week, to help bring the conversation forward. Thanks and Best Regards, Ildikó --- Ildikó Váncsa Director of Community Open Infrastructure Foundation > [...] 
_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io
From amy at demarco.com Fri Mar 24 21:03:02 2023 From: amy at demarco.com (Amy Marrich) Date: Fri, 24 Mar 2023 16:03:02 -0500 Subject: [Starlingx-discuss] [Diversity] Diversity and Inclusion at the PTG Message-ID: I have blocked off three hours on Monday for the D&I WG to discuss the upcoming Diversity Survey (14:00 UTC) and then ongoing changes to the Code of Conduct (15:00-16:00 UTC) and then the Summit if there is time. The agenda can be found here[0]. All projects are encouraged to attend these sessions as the WG is at the Foundation level. Thanks, Amy (spotz) 0 - https://etherpad.opendev.org/p/march2023-ptg-diversity
From ildiko at openinfra.dev Sat Mar 25 01:26:59 2023 From: ildiko at openinfra.dev (Ildiko Vancsa) Date: Fri, 24 Mar 2023 18:26:59 -0700 Subject: [Starlingx-discuss] Docker is sunsetting the Free Team Organizations In-Reply-To: <1EF5DD94-4B60-4549-BBD5-F3DFA22D7915@openinfra.dev> References: <1EF5DD94-4B60-4549-BBD5-F3DFA22D7915@openinfra.dev> Message-ID: <42BAD7FD-A196-4B9B-AB2E-5AE0AEE92F2E@openinfra.dev> Hi, I have a quick update on this topic. Docker announced today (March 24) that they reversed their decision on sunsetting the 'Free Team Organizations' program: https://www.docker.com/developers/free-team-faq/ Given how they announced their original intent, we still need to keep a closer eye on their open source program as well, and prepare an alternate plan in case we need to take action. We can discuss it further at the PTG next week, as we already have an agenda item for the topic on Wednesday. Best Regards, Ildikó --- Ildikó Váncsa Director of Community Open Infrastructure Foundation > On Mar 22, 2023, at 14:00, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > I'm reaching out to you with regards to Docker's recent changes in their support services for open source projects. > > The main change for now is that Docker is removing their Free Team Organizations support option, which many open source projects have been using. You can read more about that here: https://web.docker.com/rs/790-SSB-375/images/privatereposfaq.pdf > > > StarlingX signed up to Docker's Open Source program in 2021, and the StarlingX project on DockerHub is still tagged as 'Sponsored OSS'. See here: https://hub.docker.com/u/starlingx > > The 'Sponsored OSS' tag means that our files and images that are stored on DockerHub should not be affected at this time. > > However, if anyone has received any communication from Docker with regards to the StarlingX assets that are stored on DockerHub OR our open source tier subscription, please reach out to me! So I can look into potential issues and fixes to make sure we don't experience any service disruptions. > > There are a couple of open source communities who are setting up alternative registries to store their images and artifacts. 
Please let me know if that would be of interest to StarlingX as well, and I will share some more information. > > Thanks and Best Regards, > Ildikó > > --- > > Ildikó Váncsa > Director of Community > Open Infrastructure Foundation > > > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io
From Ghada.Khalil at windriver.com Mon Mar 27 13:18:31 2023 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 27 Mar 2023 13:18:31 +0000 Subject: [Starlingx-discuss] Canceled: Bi-weekly StarlingX Release Meeting Message-ID: Cancelling due to the StarlingX PTG Updated meeting series for the StarlingX Release Meeting Starting from Nov 9, 2022 and meeting every other week. Bi-weekly meeting on Wednesday 06:30AM PT / 09:30AM ET / 02:30PM UTC Zoom Link: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases Regards, Ghada -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1981 bytes Desc: not available URL:
From ildiko at openinfra.dev Mon Mar 27 13:21:33 2023 From: ildiko at openinfra.dev (Ildiko Vancsa) Date: Mon, 27 Mar 2023 06:21:33 -0700 Subject: [Starlingx-discuss] Diversity related discussions at the PTG starting soon! Message-ID: Hi StarlingX Community, The OpenInfra Diversity & Inclusion working group is meeting today at the PTG at 1400 UTC! The WG will discuss topics like renewing the D&I survey and refreshing the Code of Conduct we have for OpenInfra Communities. Please join the session if you are interested in learning and helping out in D&I related areas. You can find dial-in information here: ptg.opendev.org Thanks, Ildikó --- Ildikó Váncsa Director of Community Open Infrastructure Foundation
From Linda.Wang at windriver.com Mon Mar 27 13:30:24 2023 From: Linda.Wang at windriver.com (Wang, Linda) Date: Mon, 27 Mar 2023 13:30:24 +0000 Subject: [Starlingx-discuss] Bi-Weekly StarlingX OS Distro & Multi-OS Meeting: March 7, 2023 Message-ID: March 7, 2023 Attendees: SteveG, DaveletP, ScottL, CharlesS, MarkA Low attendance so we just had a few open discussions... 1. CENGN python3 package build hanging - Scott investigating 2. If Signing Server not in use (for example developer builds) then default keys should be used - Developer builds should still be signed but not use the Signing Server to prevent a disgruntled STX developer being able to publish official images 3. With the Debian 'transition' behind us Mark would like to start to review all package forks to explore alternatives to patching packages in order to satisfy STX requirements. For example ways to remove the 'notifications of death' kernel patch or the bash 'logging' patch. Next Meeting: March 22, 2023 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Juanita.Balaraj at windriver.com Mon Mar 27 13:39:28 2023 From: Juanita.Balaraj at windriver.com (Balaraj, Juanita) Date: Mon, 27 Mar 2023 13:39:28 +0000 Subject: [Starlingx-discuss] StarlingX Docs Team Call Message-ID: Cancelling due to the PTG Meeting. Join us if you have interest in StarlingX docs! 
Call details * https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 * Dialing in from phone: * Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * Passcode: 419405 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes * The agenda and notes for each call are kept here: https://etherpad.openstack.org/p/stx-documentation -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3025 bytes Desc: not available URL: From Lucas.DeAtaidesBarreto at windriver.com Tue Mar 28 11:30:54 2023 From: Lucas.DeAtaidesBarreto at windriver.com (De Ataides Barreto, Lucas) Date: Tue, 28 Mar 2023 11:30:54 +0000 Subject: [Starlingx-discuss] Sanity and Regression - StarlingX + STX-Openstack MASTER build [20230326T060000Z] results - Mar-28 Message-ID: Hi all, StarlingX + STX-Openstack sanity and regression results: Overall status: YELLOW Build Details: Build Date: Mar-26 ISO: https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230326T060000Z/outputs/iso/starlingx-intel-x86-64-cd.iso Helm Charts: https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230326T060000Z/outputs/helm-charts/stx-openstack-1.0-1.stx.45-debian-stable-versioned.tgz AIO-DX Baremetal with VSWITCH_TYPE=OVS Sanity Results: Overall Status: GREEN Automated Test Results Summary: ------------------------------------------------------ Passed: 15 (100.00%) Failed: 0 (0.0%) Total Executed: 15 List of Test Cases: ------------------------------------------------------ PASS ?20230328?04:39:01 ??????test_ssh_to_hosts PASS ?20230328?04:40:48 ??????test_lock_unlock_host PASS ?20230328?04:59:39 ??????test_openstack_services_healthy PASS ?20230328?05:00:45 ??????test_reapply_stx_openstack_no_change[controller-0] PASS ?20230328?05:02:46 ??????test_reapply_stx_openstack_no_change[controller-1] PASS ?20230328?05:11:55 ??????test_horizon_create_delete_instance PASS ?20230328?05:17:20 ??????test_swact_controllers PASS ?20230328?05:25:44 ??????test_ping_between_two_vms[tis-centos-guest-virtio-virtio] PASS ?20230328?05:32:25 ??????test_migrate_vm[tis-centos-guest-live-None] PASS ?20230328?05:37:31 ??????test_nova_actions[tis-centos-guest-dedicated-pause-unpause] PASS ?20230328?05:42:09 ??????test_nova_actions[tis-centos-guest-dedicated-suspend-resume] PASS ?20230328?05:46:45 ??????test_evacuate_vms PASS ?20230328?06:20:49 ??????test_system_coredumps_and_crashes[core_dumps] PASS ?20230328 06:21:10 ??????test_system_coredumps_and_crashes[crash_reports] PASS ?20230328 06:21:23 ??????test_system_alarms ----------------------------------------------------------------------- Regression Results: Overall Status: YELLOW Automated Test Results Summary: ------------------------------------------------------ Passed: 10 (83.33%) Failed: 2 (16.67%) Total Executed: 12 List of Test Cases: ------------------------------------------------------ PASS ?20230328 06:33:05 ?????test_lldp_neighbor_remote_port PASS ?20230328 06:34:38 ??????test_kernel_module_signatures PASS ?20230328 06:35:39 ??????test_delete_heat_after_swact[OS_Cinder_Volume.yaml] PASS ?20230328 06:40:34 ??????test_multiports_on_same_network_vm_actions[virtio_x4] FAIL ?20230328 07:00:43 ??????test_cpu_pol_vm_actions[2-dedicated-image-volume] PASS ?20230328 07:11:24 
??????test_vm_mem_pool_default_config[2048] PASS ?20230328 07:14:22 ??????test_vm_mem_pool_default_config[1048576] PASS ?20230328 07:21:04 ??????test_resize_vm_positive[local_image-4_1_512-5_2_1024-image] PASS ?20230328 07:28:23 ??????test_server_group_boot_vms[affinity-2] PASS ?20230328 07:33:52 ??????test_server_group_boot_vms[anti_affinity-2] FAIL ?20230328 07:39:42 ??????test_vm_with_config_drive PASS ?20230328 07:41:47 ??????test_lock_with_vms ----------------------------------------------------------------------- 2 issues were reproduced: * https://bugs.launchpad.net/starlingx/+bug/2012389 - STX-Openstack: Failed to activate binding for port for live migration * https://bugs.launchpad.net/starlingx/+bug/2012392 - STX-Openstack: Failed to create volume with --image flag Thanks, STX-Openstack Distro Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From starlingx.build at gmail.com Tue Mar 28 16:48:24 2023 From: starlingx.build at gmail.com (starlingx.build at gmail.com) Date: Tue, 28 Mar 2023 12:48:24 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_debian_master - Build # 129 - Failure! Message-ID: <1989599302.35.1680022106242.JavaMail.javamailuser@localhost> Project: STX_build_debian_master Build #: 129 Status: Failure Timestamp: 20230328T060000Z Branch: Check logs at: $PUBLISH_LOGS_URL -------------------------------------------------------------------------------- Parameters BUILD_PACKAGES_LIST: CLEAN_DOWNLOADS: false BUILD_HELM_CHARTS: true USE_DOCKER_CACHE: true DOCKER_IMAGE_LIST: PUSH_DOCKER_IMAGES: true CLEAN_DOCKER: true REFRESH_SOURCE: true DRY_RUN: false CLEAN_PACKAGES: true BUILD_RT: true BUILD_PACKAGES: true CLEAN_REPOMGR: true PKG_REUSE: false JENKINS_SCRIPTS_BRANCH: master BUILD_DOCKER_BASE_IMAGE: true BUILD_DOCKER_IMAGES: true FORCE_BUILD: false REBUILD_BUILDER_IMAGES: true CLEAN_ISO: true BUILD_ISO: true From starlingx.build at gmail.com Tue Mar 28 18:58:49 2023 From: starlingx.build at gmail.com (starlingx.build at gmail.com) Date: Tue, 28 Mar 2023 14:58:49 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_debian_master - Build # 130 - Still Failing! In-Reply-To: <905302843.33.1680022101164.JavaMail.javamailuser@localhost> References: <905302843.33.1680022101164.JavaMail.javamailuser@localhost> Message-ID: <1476956488.38.1680029930056.JavaMail.javamailuser@localhost> Project: STX_build_debian_master Build #: 130 Status: Still Failing Timestamp: 20230328T185713Z Branch: Check logs at: $PUBLISH_LOGS_URL -------------------------------------------------------------------------------- Parameters BUILD_PACKAGES_LIST: CLEAN_DOWNLOADS: false BUILD_HELM_CHARTS: true USE_DOCKER_CACHE: true DOCKER_IMAGE_LIST: PUSH_DOCKER_IMAGES: true CLEAN_DOCKER: true REFRESH_SOURCE: true DRY_RUN: false CLEAN_PACKAGES: true BUILD_RT: true BUILD_PACKAGES: true CLEAN_REPOMGR: true PKG_REUSE: true JENKINS_SCRIPTS_BRANCH: master BUILD_DOCKER_BASE_IMAGE: true BUILD_DOCKER_IMAGES: true FORCE_BUILD: false REBUILD_BUILDER_IMAGES: true CLEAN_ISO: true BUILD_ISO: true From starlingx.build at gmail.com Tue Mar 28 19:34:25 2023 From: starlingx.build at gmail.com (starlingx.build at gmail.com) Date: Tue, 28 Mar 2023 15:34:25 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_debian_master - Build # 131 - Still Failing! 
In-Reply-To: <1137065920.36.1680029926714.JavaMail.javamailuser@localhost> References: <1137065920.36.1680029926714.JavaMail.javamailuser@localhost> Message-ID: <1640279900.41.1680032065908.JavaMail.javamailuser@localhost> Project: STX_build_debian_master Build #: 131 Status: Still Failing Timestamp: 20230328T193332Z Branch: $BRANCH Check logs at: $PUBLISH_LOGS_URL -------------------------------------------------------------------------------- Parameters BUILD_PACKAGES_LIST: CLEAN_DOWNLOADS: false BUILD_HELM_CHARTS: true USE_DOCKER_CACHE: true DOCKER_IMAGE_LIST: PUSH_DOCKER_IMAGES: true CLEAN_DOCKER: true REFRESH_SOURCE: true DRY_RUN: false CLEAN_PACKAGES: true BUILD_RT: true BUILD_PACKAGES: true CLEAN_REPOMGR: true PKG_REUSE: false JENKINS_SCRIPTS_BRANCH: master BUILD_DOCKER_BASE_IMAGE: true BUILD_DOCKER_IMAGES: true FORCE_BUILD: false REBUILD_BUILDER_IMAGES: true CLEAN_ISO: true BUILD_ISO: true From starlingx.build at gmail.com Wed Mar 29 13:26:15 2023 From: starlingx.build at gmail.com (starlingx.build at gmail.com) Date: Wed, 29 Mar 2023 09:26:15 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master mail_test - Build # 9 - Still Failing! In-Reply-To: <1263112438.22.1679062945773.JavaMail.javamailuser@localhost> References: <1263112438.22.1679062945773.JavaMail.javamailuser@localhost> Message-ID: <1572325088.44.1680096376576.JavaMail.javamailuser@localhost> Project: mail_test Build #: 9 Status: Still Failing Timestamp: 20230329T132613Z Branch: master master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230329T132613Z/logs -------------------------------------------------------------------------------- Parameters From Peng.Peng at windriver.com Wed Mar 29 13:18:00 2023 From: Peng.Peng at windriver.com (Peng, Peng) Date: Wed, 29 Mar 2023 13:18:00 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20230326T060000Z Message-ID: Sanity Test from 2023 March 28 (https://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230326T060000Z/outputs/iso/starlingx-intel-x86-64-cd.iso) Status: GREEN SX sanity Passed: 17 (100.0%) Failed: 0 (0.0%) Total Executed: 17 List of Test Cases: ------------------------------------------------------ PASS test_system_health_pre_session[pods] PASS test_system_health_pre_session[alarms] PASS test_system_health_pre_session[system_apps] PASS test_horizon_host_inventory_display PASS test_lock_unlock_host PASS test_pod_to_pod_connection PASS test_pod_to_service_connection PASS test_host_to_service_connection PASS test_push_docker_image_to_local_registry_active PASS test_upload_charts_via_helm_upload PASS test_host_operations_with_custom_kubectl_app PASS test_isolated_2p_2_big_pod_best_effort_HT_AIO PASS test_sriovdp_netdev_single_pod[1-1-lock/unlock] PASS test_sriovdp_netdev_connectivity_ipv4[1-1-calico-ipam] PASS test_sriovdp_mixed_add_vf_interface[1] PASS test_system_coredumps_and_crashes[core_dumps] PASS test_system_coredumps_and_crashes[crash_reports] DX sanity Passed: 23 (100.0%) Failed: 0 (0.0%) Total Executed: 23 List of Test Cases: ------------------------------------------------------ PASS test_system_health_pre_session[pods] PASS test_system_health_pre_session[alarms] PASS test_system_health_pre_session[system_apps] PASS test_horizon_host_inventory_display PASS test_lock_unlock_host PASS test_swact_controller_platform PASS test_pod_to_pod_connection PASS test_pod_to_service_connection PASS test_host_to_service_connection PASS 
test_push_docker_image_to_local_registry_active PASS test_push_docker_image_to_local_registry_standby PASS test_upload_charts_via_helm_upload PASS test_host_operations_with_custom_kubectl_app PASS test_force_reboot_host[active_controller-True] PASS test_force_reboot_host[active_controller-False] PASS test_force_reboot_host[standby_controller-False] PASS test_bmc_verify_bm_type_ipmi PASS test_sriovdp_netdev_single_pod[1-1-lock/unlock] PASS test_sriovdp_netdev_connectivity_ipv4[1-1-calico-ipam] PASS test_sriovdp_netdev_connectivity_ipv6[1-1-calico-ipam] PASS test_sriovdp_mixed_add_vf_interface[1] PASS test_system_coredumps_and_crashes[core_dumps] PASS test_system_coredumps_and_crashes[crash_reports] Regards, PV team -------------- next part -------------- An HTML attachment was scrubbed... URL: From starlingx.build at gmail.com Wed Mar 29 17:54:25 2023 From: starlingx.build at gmail.com (starlingx.build at gmail.com) Date: Wed, 29 Mar 2023 13:54:25 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master.v2 mail_test - Build # 10 - Still Failing! In-Reply-To: <107185229.42.1680096374116.JavaMail.javamailuser@localhost> References: <107185229.42.1680096374116.JavaMail.javamailuser@localhost> Message-ID: <738823934.47.1680112465978.JavaMail.javamailuser@localhost> Project: mail_test Build #: 10 Status: Still Failing Timestamp: 20230329T175423Z Branch: master.v2 master.v2 Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230329T175423Z/logs -------------------------------------------------------------------------------- Parameters From starlingx.build at gmail.com Wed Mar 29 18:07:20 2023 From: starlingx.build at gmail.com (starlingx.build at gmail.com) Date: Wed, 29 Mar 2023 14:07:20 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master mail_test - Build # 11 - Still Failing! In-Reply-To: <584530363.45.1680112463568.JavaMail.javamailuser@localhost> References: <584530363.45.1680112463568.JavaMail.javamailuser@localhost> Message-ID: <7424706.50.1680113241009.JavaMail.javamailuser@localhost> Project: mail_test Build #: 11 Status: Still Failing Timestamp: 20230329T180717Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230329T180717Z/logs -------------------------------------------------------------------------------- Parameters From starlingx.build at gmail.com Wed Mar 29 18:16:08 2023 From: starlingx.build at gmail.com (starlingx.build at gmail.com) Date: Wed, 29 Mar 2023 14:16:08 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master mail_test - Build # 12 - Still Failing! In-Reply-To: <222930253.48.1680113237892.JavaMail.javamailuser@localhost> References: <222930253.48.1680113237892.JavaMail.javamailuser@localhost> Message-ID: <1137501158.53.1680113769502.JavaMail.javamailuser@localhost> Project: mail_test Build #: 12 Status: Still Failing Timestamp: 20230329T181607Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/20230329T180717Z/logs -------------------------------------------------------------------------------- Parameters From ildiko at openinfra.dev Wed Mar 29 23:02:54 2023 From: ildiko at openinfra.dev (Ildiko Vancsa) Date: Wed, 29 Mar 2023 16:02:54 -0700 Subject: [Starlingx-discuss] StarlingX User Survey Message-ID: <5B337342-0267-4DCF-A491-9383E6644F70@openinfra.dev> Hi StarlingX Community, We talked about the new StarlingX User Survey during the PTG session today. 
We?ve launched a new version of the StarlingX User Survey a couple of months ago. Our goal with the new survey was to gather more information that the community can use to make informed decisions on setting priorities and make improvements to the platform as well as the project's resources. We added new questions to ask for more details about the respondents? choices in architecture and HW and SW configurations, while we also asked for more feedback about documentation and the project in general. As of now we have 5 responses. Below please find highlights of the aggregated results: * Survey respondents are in different phases of evaluation - 60% is looking into deploying Distributed Cloud as opposed to a collection of standalone clouds * 100% of the respondents said that they will be deploying a full, containerized OpenStack cloud (stx-openstack) as part of your StarlingX environment * Acceleration & performance - 80% listed 'Dedicated CPUs' - 60% listed 'Isolated CPUs' - 60% listed PTP * Services and protocols - 60% listed 'Metrics server' - 60% listed 'SNMP' - 40% listed ISTIO * Docs average score of responses - Usage score is 8 out of 10 - Readability score is 8.5 out of 10 - Complete descriptions score is 8 out of 10 - Easy to navigate score is 8.5 out of 10 * Net Promoter Score (NPS): 8.75 As a reminder, the new user survey is available here: https://openinfrafoundation.formstack.com/forms/starlingx_user_survey Please feel free to share the link within you ecosystem, to help gather more information from people and organizations who?re using or evaluating StarlingX. Thanks and Best Regards, Ildik? ??? Ildik? V?ncsa Director of Community Open Infrastructure Foundation From voipas at gmail.com Fri Mar 31 07:50:03 2023 From: voipas at gmail.com (voipas) Date: Fri, 31 Mar 2023 10:50:03 +0300 Subject: [Starlingx-discuss] failing to deploy Simplex Starlingx AIO on VirtualBox In-Reply-To: References: Message-ID: Hey, any other recommendations? On 2023-03-23, Thu at 16:25, voipas wrote: > Hey Douglas, > > Thanks for your response. I recreated a new VM with 24 GB RAM - again > after installation, Kubelet is still not launching... Also, attaching disk > layout, just in case we have sufficient space > NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT > sda 8:0 0 520G 0 disk > |-sda1 8:1 0 1M 0 part > |-sda2 8:2 0 29.3G 0 part > /var/rootdirs/opt/platform-backup > |-sda3 8:3 0 300M 0 part /boot/efi > |-sda4 8:4 0 2G 0 part /boot > `-sda5 8:5 0 488.4G 0 part > |-cgts--vg-root--lv 253:0 0 20G 0 lvm /sysroot > |-cgts--vg-var--lv 253:1 0 20G 0 lvm /var > |-cgts--vg-log--lv 253:2 0 7.8G 0 lvm /var/log > `-cgts--vg-scratch--lv 253:3 0 15.6G 0 lvm /var/rootdirs/scratch > sr0 11:0 1 1024M 0 rom > nvme0n1 259:0 0 50G 0 disk > > > I see these kind of errors in daemon log: > > 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: notice > /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy > directory /var/run/, updating /var/run/kubernetes <86><92> > /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly. > 2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info Inserted > module 'ib_cm' > 2023-03-23T13:58:51.020 localhost systemd-modules-load[438]: info Inserted > module 'ib_ucm' > 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to > parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. 
Ignoring > 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to > parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to > parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.020 localhost systemd-tmpfiles[454]: warning Failed to > parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to > parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to > parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.021 localhost systemd-tmpfiles[454]: warning Failed to > parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or > directory. Ignoring > 2023-03-23T13:58:51.021 localhost systemd[1]: info Finished Create Static > Device Nodes in /dev. > 2023-03-23T13:58:51.021 localhost systemd[1]: info Starting Rule-based > Manager for Device Events and Files... > 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted > module 'ib_uverbs' > 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted > module 'iw_cm' > 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted > module 'rdma_cm' > 2023-03-23T13:58:51.021 localhost systemd-modules-load[438]: info Inserted > module 'rdma_ucm' > 2023-03-23T13:58:51.021 localhost systemd-udevd[459]: err > /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:8 Unknown user 'ceph', ignoring > 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err > /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:8 Unknown group 'ceph', ignoring > 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err > /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:13 Unknown user 'ceph', ignoring > 2023-03-23T13:58:51.022 localhost systemd-udevd[459]: err > /usr/lib/udev/rules.d/95-ceph-osd-lvm.rules:13 Unknown group 'ceph', > ignoring > 2023-03-23T13:58:51.022 localhost systemd[1]: info Started Rule-based > Manager for Device Events and Files. > > 2023-03-23T13:58:51.022 localhost systemd[1]: info Starting Apply Kernel > Variables... > 2023-03-23T13:58:51.022 localhost systemd-sysctl[482]: info Couldn't write > '20' to 'fs/negative-dentry-limit', ignoring: No such file or directory > 2023-03-23T13:58:51.022 localhost systemd[1]: info Finished Apply Kernel > Variables. > > 2023-03-23T13:58:51.022 localhost systemd-udevd[474]: info Using interface > naming scheme 'vSTX7_0'. > 2023-03-23T13:58:51.022 localhost systemd-udevd[474]: info ethtool: > autonegotiation is unset or enabled, the speed and duplex are not writable. > 2023-03-23T13:58:51.022 localhost systemd-udevd[463]: info ethtool: > autonegotiation is unset or enabled, the speed and duplex are not writable. > > 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: notice > /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy > directory /var/run/, updating /var/run/kubernetes <86><92> > /run/kubernetes; please update the tmpfiles.d/ drop-in file accordingly. > > 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to > parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. 
Ignoring > 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to > parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to > parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to > parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to > parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to > parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T13:58:51.024 localhost systemd-tmpfiles[760]: warning Failed to > parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or > directory. Ignoring > 2023-03-23T13:58:51.024 localhost systemd[1]: info Finished Create > Volatile Files and Directories. > > 2023-03-23T13:58:51.025 localhost systemd[1]: info Started Kubernetes > Kubelet Server. > 2023-03-23T13:58:51.025 localhost polkitd[801]: info started daemon > version 0.105 using authority implementation `local' version `0.105' > 2023-03-23T13:58:51.025 localhost systemd[863]: info kubelet.service: > Failed to locate executable /usr/bin/kubelet: No such file or directory > 2023-03-23T13:58:51.025 localhost systemd[863]: err kubelet.service: > Failed at step EXEC spawning /usr/bin/kubelet: No such file or directory > 2023-03-23T13:58:51.025 localhost systemd[1]: info Starting Kubernetes > Isolated CPU Plugin Daemon... > > 2023-03-23T13:59:10.551 localhost controller_config[1459]: info Pausing > for 5 seconds... > 2023-03-23T13:59:14.605 localhost lldpd[998]: info removal request for > address of fe80::a00:27ff:fe85:8445%2, but no knowledge of it > 2023-03-23T13:59:14.842 localhost lldpd[998]: info removal request for > address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it > 2023-03-23T13:59:15.565 localhost systemd[1]: notice > controllerconfig.service: Main process exited, code=exited, status=1/FAILURE > 2023-03-23T13:59:15.565 localhost systemd[1]: warning > controllerconfig.service: Failed with result 'exit-code'. > 2023-03-23T13:59:15.579 localhost systemd[1]: info Finished General > StarlingX config gate. > 2023-03-23T13:59:15.581 localhost systemd[1]: info Starting StarlingX > Maintenance Filesystem Monitor... > 2023-03-23T13:59:15.583 localhost systemd[1]: info Started Getty on tty1. > 2023-03-23T13:59:15.584 localhost systemd[1]: info Reached target Login > Prompts. > 2023-03-23T13:59:15.586 localhost systemd[1]: info Starting StarlingX > Maintenance Worker Goenable Ready... > 2023-03-23T13:59:15.587 localhost systemd[1]: info Starting StarlingX > Maintenance Goenable Ready... > 2023-03-23T13:59:15.588 localhost systemd[1]: info Starting StarlingX > Maintenance Heartbeat Client... > 2023-03-23T13:59:15.589 localhost systemd[1]: info Starting Starling-X > Maintenance Link Monitor... > 2023-03-23T13:59:15.590 localhost systemd[1]: info Starting StarlingX > Maintenance Alarm Handler Client... > 2023-03-23T13:59:15.591 localhost systemd[1]: info Starting StarlingX > Maintenance Logger... > 2023-03-23T13:59:15.593 localhost systemd[1]: info Starting StarlingX > Pxeboot Feed Refresh... 
> 2023-03-23T13:59:15.594 localhost systemd[1]: info Starting Service > Management Unit... > 2023-03-23T13:59:15.597 localhost systemd[1]: info Finished StarlingX > Maintenance Worker Goenable Ready. > 2023-03-23T13:59:15.610 localhost goenabled[1504]: info Goenabled Ready: [ > OK ] > 2023-03-23T13:59:15.610 localhost systemd[1]: info Finished StarlingX > Maintenance Goenable Ready. > 2023-03-23T13:59:15.630 localhost lmon[1507]: info Starting lmond: OK > 2023-03-23T13:59:15.630 localhost systemd[1]: info lmon.service: Can't > open PID file /run/lmond.pid (yet?) after start: Operation not permitted > 2023-03-23T13:59:15.633 localhost mtclog[1509]: info Starting mtclogd: OK > 2023-03-23T13:59:15.634 localhost hbsClient[1506]: info Starting > hbsClient: OK > 2023-03-23T13:59:15.635 localhost fsmon[1501]: info Starting fsmond: OK > 2023-03-23T13:59:15.636 localhost systemd[1]: info mtclog.service: Can't > open PID file /run/mtclogd.pid (yet?) after start: Operation not permitted > 2023-03-23T13:59:15.636 localhost systemd[1]: info hbsClient.service: > Can't open PID file /run/hbsClient.pid (yet?) after start: Operation not > permitted > 2023-03-23T13:59:15.637 localhost systemd[1]: info fsmon.service: Can't > open PID file /run/fsmond.pid (yet?) after start: Operation not permitted > 2023-03-23T13:59:15.637 localhost mtcalarm[1508]: info Starting mtcalarmd: > OK > 2023-03-23T13:59:15.639 localhost systemd[1]: info mtcalarm.service: Can't > open PID file /run/mtcalarmd.pid (yet?) after start: Operation not permitted > > > 2023-03-23T14:03:53.832 localhost affine-tasks.sh(1218): info : Recovery > wait, elapsed 301 seconds. Reason: k8s-infra not configured > 2023-03-23T14:08:00.073 localhost avahi-daemon[796]: info Joining mDNS > multicast group on interface enp0s3.IPv4 with address 10.0.1.3. > 2023-03-23T14:08:00.074 localhost avahi-daemon[796]: info New relevant > interface enp0s3.IPv4 for mDNS. > 2023-03-23T14:08:00.074 localhost avahi-daemon[796]: info Registering new > address record for 10.0.1.3 on enp0s3.IPv4. > 2023-03-23T14:08:01.560 localhost avahi-daemon[796]: info Joining mDNS > multicast group on interface enp0s3.IPv6 with address > fe80::a00:27ff:fe85:8445. > 2023-03-23T14:08:01.561 localhost avahi-daemon[796]: info New relevant > interface enp0s3.IPv6 for mDNS. > 2023-03-23T14:08:01.561 localhost avahi-daemon[796]: info Registering new > address record for fe80::a00:27ff:fe85:8445 on enp0s3.*. > 2023-03-23T14:08:54.436 localhost affine-tasks.sh(1218): info : Recovery > wait, elapsed 602 seconds. Reason: k8s-infra not configured > 2023-03-23T14:11:06.065 localhost lldpd[998]: info removal request for > address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it > 2023-03-23T14:13:37.869 localhost systemd[1]: info Starting Cleanup of > Temporary Directories... > 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: notice > /etc/tmpfiles.d/kubernetes.conf:1: Line references path below legacy > directory /var/run/, updating /var/run/kubernetes <86><92> > /run/kubernetes; please update the tmpfiles.d/ drop-in file accor > dingly. > 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed > to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed > to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. 
Ignoring > 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed > to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed > to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed > to parse ACL "d:group:sys_protected:r-x,d:group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed > to parse ACL "group:sys_protected:r-x,group:wheel:r-x": No such file or > directory. Ignoring > 2023-03-23T14:13:37.874 localhost systemd-tmpfiles[2092]: warning Failed > to parse ACL "group:sys_protected:r--,group:wheel:r--": No such file or > directory. Ignoring > 2023-03-23T14:13:37.912 localhost systemd[1]: info > systemd-tmpfiles-clean.service: Succeeded. > 2023-03-23T14:13:37.912 localhost systemd[1]: info Finished Cleanup of > Temporary Directories. > 2023-03-23T14:13:55.158 localhost affine-tasks.sh(1218): info : Recovery > wait, elapsed 903 seconds. Reason: k8s-infra not configured > 2023-03-23T14:18:55.627 localhost affine-tasks.sh(1218): info : Recovery > wait, elapsed 1203 seconds. Reason: k8s-infra not configured > 2023-03-23T14:22:57.658 localhost lldpd[998]: info removal request for > address of fe80::a00:27ff:fe4f:69bd%3, but no knowledge of it > > On Thu, Mar 23, 2023 at 3:06?PM Pereira, Douglas < > Douglas.Pereira at windriver.com> wrote: > >> Hi Giedrius, >> >> >> >> Have you tried increasing the VM memory? The documentation >> >> suggests 20480 MB for the AIO-SX configuration and you are using only 16GB. >> >> >> >> Regards, >> >> Doug >> >> >> >> *From:* voipas >> *Sent:* Wednesday, March 22, 2023 2:40 PM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] failing to deploy Simplex Starlingx AIO >> on VirtualBox >> >> >> >> *CAUTION: This email comes from a non Wind River email account!* >> Do not click links or open attachments unless you recognize the sender >> and know the content is safe. >> >> Hello colleagues, >> >> >> >> I need your support here. Simplex Starlingx AIO installation fails on >> first steps... >> >> - After installation and reboot I see that kubelet.service - >> Kubernetes Kubelet Server Failed. Not sure if it is normal or not at this >> phase... See more details below >> - Bootstrapping failed - Failed to provision initial system >> configuration. >> >> >> >> I'm trying to install Starlingx on my Intel Nuc box (i5, 64 GB RAM, 2 TB >> disk) with Ubuntu Desktop OS. VirtualBox version 6.1 >> >> >> >> VM configuration: >> >> - 8 vCPU (VT-X/AMD-V, Nested Paging, PAE/NX, KVM Paravirtualization) >> - 16 GB RAM >> - Storage: >> >> >> - Controller SATA 520 GB >> - Controller NVMe 20 GB >> >> >> - Network: >> >> >> - Intel Pro/1000 MT Desktop - OAM network (internet accessible) >> - >> - Intel Pro/1000 MT Desktop - Data network (internet accessible) >> >> >> >> I used latest ISO image: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/release/8.0.0/debian/monolithic/outputs/iso/starlingx-intel-x86-64-cd.iso >> >> >> >> >> So I wonder what is wrong with this deployment , am I missing something? >> >> >> >> >> >> *Kubelet failure (*/var/log/daemon.log*):* >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> *2023-03-21T19:35:19.876 localhost systemd[1]: info Started StarlingX >> Affine Tasks. 
2023-03-21T19:35:19.968 localhost iscsid: info iSCSI daemon >> with pid=912 started! 2023-03-21T19:35:20.057 localhost >> affine-tasks.sh(1211): info : Starting. 2023-03-21T19:35:20.058 localhost >> affine-tasks.sh(1211): info : Affine all tasks, CPUS: 0-7; online=0-7 >> (0xff), isol=, nonisol=0-7 (0xff) 2023-03-21T19:35:20.128 localhost >> affine-tasks.sh(1211): info : Affined 58 processes to all cores. >> 2023-03-21T19:35:20.302 localhost systemd[1]: info kubelet.service: >> Scheduled restart job, restart counter is at 5. 2023-03-21T19:35:20.303 >> localhost systemd[1]: info Stopping Kubernetes Isolated CPU Plugin >> Daemon... 2023-03-21T19:35:20.304 localhost systemd[1]: info >> isolcpu_plugin.service: Succeeded. 2023-03-21T19:35:20.305 localhost >> systemd[1]: info Stopped Kubernetes Isolated CPU Plugin Daemon. >> 2023-03-21T19:35:20.306 localhost systemd[1]: info Stopped Kubernetes >> Kubelet Server. 2023-03-21T19:35:20.306 localhost systemd[1]: warning >> kubelet.service: Start request repeated too quickly. >> 2023-03-21T19:35:20.306 localhost systemd[1]: warning kubelet.service: >> Failed with result 'exit-code'. 2023-03-21T19:35:20.306 localhost >> systemd[1]: err Failed to start Kubernetes Kubelet Server. >> 2023-03-21T19:35:20.308 localhost systemd[1]: warning Dependency failed for >> Kubernetes Isolated CPU Plugin Daemon. 2023-03-21T19:35:20.309 localhost >> systemd[1]: notice isolcpu_plugin.service: Job isolcpu_plugin.service/start >> failed with result 'dependency'. 2023-03-21T19:35:20.514 localhost >> sysinv-agent[1012]: info /etc/init.d/sysinv-agent: line 114: [: =: unary >> operator expected* >> >> >> >> >> >> *Bootstrap failure:* >> >> >> >> >> >> >> >> *TASK [bootstrap/persist-config : Fail if populate config script throws >> an exception] >> ********************************************************************************************************************************************************************************* >> Wednesday 22 March 2023 17:29:05 +0000 (0:00:00.024) 0:01:40.002 >> ******* fatal: [localhost]: FAILED! => changed=false msg: Failed to >> provision initial system configuration. 
PLAY RECAP >> *********************************************************************************************************************************************************************************************************************************************************** >> localhost : ok=180 changed=45 unreachable=0 failed=1 >> skipped=235 rescued=0 ignored=0* >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> *2023-03-22 17:29:05,960 p=323063 u=sysadmin n=ansible | TASK >> [bootstrap/persist-config : debug] >> ********************************************************************************************************************************************************************** >> ******************************************************** 2023-03-22 >> 17:29:05,960 p=323063 u=sysadmin n=ansible | Wednesday 22 March 2023 >> 17:29:05 +0000 (0:00:06.932) 0:01:39.978 ******* 2023-03-22 >> 17:29:05,981 p=323063 u=sysadmin n=ansible | ok: [localhost] => >> populate_result: changed: true failed: false >> failed_when_result: false msg: non-zero return code rc: 1 >> stderr: |- Traceback (most recent call last): File >> "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", >> line 1327, in populate_service_parameter_config(client) >> File >> "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", >> line 1046, in populate_service_parameter_config >> populate_docker_kube_config(client) File >> "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", >> line 838, in populate_docker_kube_config >> client.sysinv.service_parameter.delete(parameter.uuid) File >> "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line >> 45, in delete return self._delete(self._path(parameter_id)) >> File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", line 95, >> in _delete self.api.raw_request('DELETE', url) File >> "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 224, in >> raw_request return self._http_request(url, method, **kwargs) >> File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line >> 186, in _http_request raise exceptions.from_response( >> cgtsclient.exc.HTTPInternalServerError: 'int' object is not callable >> stderr_lines: - 'Traceback (most recent call last):' - ' File >> "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", >> line 1327, in ' - ' >> populate_service_parameter_config(client)' - ' File >> "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", >> line 1046, in populate_service_parameter_config' - ' >> populate_docker_kube_config(client)' - ' File >> "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1679506139.0395112-327066-74144393708544/populate_initial_config.py", >> line 838, in populate_docker_kube_config' - ' >> client.sysinv.service_parameter.delete(parameter.uuid)' - ' File >> "/usr/lib/python3/dist-packages/cgtsclient/v1/service_parameter.py", line >> 45, in delete' - ' return self._delete(self._path(parameter_id))' >> - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/base.py", >> line 95, in _delete' - ' self.api.raw_request(''DELETE'', url)' >> - ' File "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line >> 224, 
in raw_request' - ' return self._http_request(url, method, >> **kwargs)' - ' File >> "/usr/lib/python3/dist-packages/cgtsclient/common/http.py", line 186, in >> _http_request' - ' raise exceptions.from_response(' - >> 'cgtsclient.exc.HTTPInternalServerError: ''int'' object is not callable' >> stdout: |- Updating system config... System config completed. >> Deleting network, routes, addresses, and address pool for network >> mgmt... Updating management network... Deleting network, >> routes, addresses, and address pool for network pxeboot... Updating >> pxeboot network... Deleting network, routes, addresses, and address >> pool for network oam... Updating oam network... Deleting >> network, routes, addresses, and address pool for network multicast... >> Updating multicast network... Deleting network, routes, addresses, >> and address pool for network cluster-host... Updating cluster host >> network... Deleting network, routes, addresses, and address pool for >> network cluster-pod... Updating cluster pod network... Deleting >> network, routes, addresses, and address pool for network cluster-service... >> Updating cluster service network... Network config completed. >> Populating/Updating DNS config... DNS config completed.* >> >> >> >> Thanks in advance >> >> >> >> -- >> >> Best Regards, >> Giedrius >> > > > -- > Best Regards, > Giedrius > -- Best Regards, Giedrius -------------- next part -------------- An HTML attachment was scrubbed... URL:
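
For anyone who hits the same two symptoms (kubelet never starting, then the Ansible bootstrap failing at the persist-config step), a minimal diagnostic sketch in shell. The commands are standard systemd, Debian, and VirtualBox tooling; the /usr/bin/kubelet path is taken from the daemon.log excerpt above, while the sysinv log location and the VM name "stx-aio-sx" are assumptions used only for illustration:

$ systemctl status kubelet.service                 # unit state and reason for the last failed start
$ journalctl -u kubelet.service -n 50 --no-pager   # last 50 journal lines for the unit
$ ls -l /usr/bin/kubelet                           # the executable the unit tries to spawn, per the log above
$ dpkg -l | grep -i kube                           # which Kubernetes packages actually landed on the node
$ sudo tail -n 100 /var/log/sysinv.log             # assumed location of the sysinv API log behind the HTTPInternalServerError in the traceback

# On the VirtualBox host, with the VM powered off, raise the memory to the
# documented 20480 MB for AIO-SX before retrying ("stx-aio-sx" is a placeholder VM name):
$ VBoxManage modifyvm "stx-aio-sx" --memory 20480 --cpus 8

If /usr/bin/kubelet is genuinely absent even though the installer finished cleanly, that would suggest an incomplete or corrupted installation image rather than a problem with the bootstrap overrides, so verifying the ISO checksum and reinstalling is worth trying before re-running the bootstrap playbook.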