From Bill.Zvonar at windriver.com Tue Jun 1 16:42:46 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 1 Jun 2021 16:42:46 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 2, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community calls tomorrow. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210602T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Greg.Waines at windriver.com Tue Jun 1 17:19:21 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Tue, 1 Jun 2021 17:19:21 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 2, 2021) In-Reply-To: References: Message-ID: StarlingX, I have to cancel the TSC portion of the community call this week, I have a conflicting appointment. We will address TSC members and StarlingX R6 Feature Candidates topics at next week's TSC/Community call (June 9). Thanks, Greg. -----Original Message----- From: Zvonar, Bill Sent: Tuesday, June 1, 2021 12:43 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Community (& TSC) Call (June 2, 2021) Hi all, reminder of the weekly TSC/Community calls tomorrow. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210602T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ankush.Rai at commscope.com Tue Jun 1 17:43:51 2021 From: Ankush.Rai at commscope.com (Rai, Ankush) Date: Tue, 1 Jun 2021 17:43:51 +0000 Subject: [Starlingx-discuss] Need info regarding this alarm License Key alarm Message-ID: License key is not installed; a valid license key is required for operation. What does this alarm indicates ? Do we need License to use starlingx ? Thanks, Ankush -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jun 1 17:49:58 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 1 Jun 2021 19:49:58 +0200 Subject: [Starlingx-discuss] stx.5.0 Release milestone declared In-Reply-To: References: Message-ID: <6CEFF8FC-12DB-4FC0-A0D3-CFAB638C4FA0@gmail.com> Hi StarlingX Community, I would like to say thank you and congratulations to all of you who participated in the StarlingX 5.0 release cycle! It feels like we launched the project yesterday and we already have the 5th release out! It is a great achievement that you all put in a lot of hard work so I hope you can take a breath and bit of rest before diving in to the 6.0 features. The press release is coming out tomorrow along with a post on the StarlingX blog to announce this milestone! 
Thanks and Best Regards, Ildikó > On May 31, 2021, at 20:53, Khalil, Ghada wrote: > > Hello all, > This email announces that the stx.5.0 Release milestone has been achieved as of May 27, 2021. StarlingX release 5.0 is officially delivered. > > The ISO is available on CENGN at: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/5.0.0/ > Release Notes are on starlingx.io at: https://docs.starlingx.io/releasenotes/r5_release.html > > Thank you to everyone in the Community - from development, test and documentation - for all of your hard work in delivering this release. > Congratulations everyone! > > Regards, > Ghada > On behalf of the StarlingX Release team > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jack at jento.io Tue Jun 1 17:54:32 2021 From: jack at jento.io (Jack Morgan) Date: Tue, 1 Jun 2021 10:54:32 -0700 Subject: [Starlingx-discuss] Need info regarding this alarm License Key alarm In-Reply-To: References: Message-ID: Ankush, I can't answer the need-a-license question, but you can read what the alarm messages mean in the docs[0]. [0] https://docs.starlingx.io/fault-mgmt/kubernetes/index.html#alarm-messages On 6/1/21 10:43, Rai, Ankush wrote: > > License key is not installed; a valid license key is required for > operation. > > What does this alarm indicates ? Do we need License to use starlingx ? > > Thanks, > > Ankush > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alexandru.dimofte at intel.com Tue Jun 1 20:02:58 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 1 Jun 2021 20:02:58 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210531T230332Z Message-ID: Sanity Test from 2021-May-31 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210531T230332Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210531T230332Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8408 bytes Desc: image001.png URL: From build.starlingx at gmail.com Wed Jun 2 04:37:15 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 2 Jun 2021 00:37:15 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 938 - Failure! Message-ID: <1389537266.118.1622608637136.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 938 Status: Failure Timestamp: 20210602T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210602T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From Greg.Waines at windriver.com Wed Jun 2 10:19:15 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 2 Jun 2021 10:19:15 +0000 Subject: [Starlingx-discuss] Need info regarding this alarm License Key alarm In-Reply-To: References: Message-ID: No this is a bug. Can you raise a starlingx launchpad bug ? Greg. 
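A quick way to inspect this alarm on a running system, assuming the standard StarlingX fault-management CLI is available from the active controller's shell (the alarm UUID below is a placeholder):

    source /etc/platform/openrc     # acquire platform admin credentials on the active controller
    fm alarm-list                   # list all active alarms, including this license alarm
    fm alarm-show <alarm-uuid>      # show the full alarm record, including the proposed repair action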
From: Rai, Ankush Sent: Tuesday, June 1, 2021 1:44 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Need info regarding this alarm License Key alarm [Please note: This e-mail is from an EXTERNAL e-mail address] License key is not installed; a valid license key is required for operation. What does this alarm indicates ? Do we need License to use starlingx ? Thanks, Ankush -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jun 2 14:40:06 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 2 Jun 2021 14:40:06 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 2, 2021) In-Reply-To: References: Message-ID: >From today's call... * Standing Topics * Build/Sanity * generally good of late, one network glitch impacted the build last night * Frank asked how the sanities have become green of late * Alex said he increased the time that the testcase waits for the controller to come up, * which seems to have given enough time now to avoid the issue that was seen previously * in https://bugs.launchpad.net/starlingx/+bug/1918420 * Gerrit Reviews in Need of Attention * patch review https://review.opendev.org/c/starlingx/rook-ceph/+/783584 * Topics for this Week * Intel effort ramping down * Development * Development for 6.0 - Separate CA for ETCD * On-demand bug fixing on OpenStack integration, K8s/container, storage projects * the dev contacts will be Austin & Mingyuan * Sanity * initial proposal was for a Daily sanity for one configuration: Bare Metal Standard * as discussed in the release team meeting, the updated proposal is to cycle through the set of configurations daily, like so: * Baremetal Standard: Monday * Baremetal Storage: Tuesday * Baremetal AIO-SX: Wednesday * Baremetal AIO-DX: Thursday * Virtual Standard: Friday * Virtual AIO-SX: Saturday * Virtual AIO-DX: Sunday * Virtual Storage: Abandon * ARs from Previous Meetings * no updates this week * Open Requests for Help * none this week * Build Matters (if required) * nothing this week -----Original Message----- From: Zvonar, Bill Sent: Tuesday, June 1, 2021 12:43 PM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (June 2, 2021) Hi all, reminder of the weekly TSC/Community calls tomorrow. Please feel free to add other items to the agenda [0] for the community call. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210602T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From alexandru.dimofte at intel.com Wed Jun 2 19:57:13 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 2 Jun 2021 19:57:13 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210602T013334Z Message-ID: Sanity Test from 2021-June-01 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210602T013334Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210602T013334Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 8408 bytes Desc: image003.png URL: From maryx.camp at intel.com Wed Jun 2 20:36:01 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 2 Jun 2021 20:36:01 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 02-Jun-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 02-Jun-21 All -- reviews merged since last meeting: 20 All -- bug status -- 17 total - team agrees to defer all low priority LP until the upstreaming effort is completed. 
Status/questions/opens Some big changes are coming - will impact many files - containers is one and terminology is another. Possibly do a "docs freeze" to avoid merge conflicts? Currently the WR team is targeting 9 June for their docs freeze. Possibly stop all reviews on Monday 7 June? WR team will discuss their options. R5 Retrospective topics parking lot (see etherpad) - everyone can add items as they think of them or can email them to Mary. Team agreed to add Juanita as another core reviewer with +2/+1 permissions. Need to remove Bruce. AR Mary will update the starlingx-docs-core list. [DONE after meeting] Frequent merge conflicts: since so many of us are updating docs, maybe we need to start making smaller reviews (smaller # of files). Good practice to delete your old local branches after the reviews are merged: git branch -D << use uppercase D to force delete Cherry picking into R5 branch - open issues Discussion about whether we have time to do this now or not? We are unclear on the best process and the order in which they need to be done. Suggestion for a separate cherry pick session after we hear from Scott. From alexandru.dimofte at intel.com Thu Jun 3 16:18:07 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 3 Jun 2021 16:18:07 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210603T015820Z Message-ID: Sanity Test from 2021-June-03 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210603T015820Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210603T015820Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.png Type: image/png Size: 8408 bytes Desc: image003.png URL: From Ghada.Khalil at windriver.com Fri Jun 4 01:08:10 2021 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 4 Jun 2021 01:08:10 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - June 2/2021 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases stx.5.0 - stx.5.0 Release Milestone has been declared on May 27/2021 - Announcement on stx-discuss: http://lists.starlingx.io/pipermail/starlingx-discuss/2021-May/011546.html - stx.5.0 Release Spreadsheet: https://docs.google.com/spreadsheets/d/1JbOQELqXG_GDoP1jo6YoRDytpTkJEs89TVimzvrqc_A/edit#gid=1107209846 - stx.5.0 Docs - The majority of the docs are in good shape. The Install Guides, Release Notes, landing page are up-to-date. - There may be a few cherrypicks still required for the r/stx.5.0 release branch. - What is the anticipated date for completing the cherrypicks? - Scott: Do we want to re-tag again once the docs cherrypicks are done? Action: Mary to communicate date - stx.5.0 StoryBoard - Still some stories are left open that need some house-keeping - https://storyboard.openstack.org/#!/story/list?status=active&project_group_id=86&tags=stx.5.0 - Ghada to send email to mailing list asking primes to close them off as needed stx.6.0 - Release Planning Spreadsheet: https://docs.google.com/spreadsheets/d/13p0BMlBgJXUVForOFsblAJq9jA1-FMBlmhV5TIc70IE/edit#gid=1107209846 - We are in the process of collecting feature proposals - Community members are encouraged to propose/add features in the next few weeks - Reduced Sanity Proposal - Will test only one configuration per day - The original proposal by Nick is to run Baremetal Standard - A proposed alternative would be to rotate between the different configurations. - This was discussed as the preferred option by the release team meeting attendees. - Baremetal Standard: Monday - Baremetal Storage: Tuesday - Baremetal AIO-SX: Wednesday - Baremetal AIO-DX: Thursday - Virtual Standard: Friday - Virtual AIO-SX: Saturday - Virtual AIO-DX: Sunday - Virtual Storage: Abandon - Note: From Scott: We may need to re-consider how we label green loads on CENGN From Ghada.Khalil at windriver.com Fri Jun 4 01:18:24 2021 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 4 Jun 2021 01:18:24 +0000 Subject: [Starlingx-discuss] FW: Action Required: Closing off StarlingX stx.5.0 storyboards In-Reply-To: References: Message-ID: Forwarding to the stx-discuss mailing list for visibility. From: Khalil, Ghada Sent: Thursday, June 03, 2021 9:17 PM To: Qi, Mingyuan ; Camp, MaryX ; Subramanian, Ramaswamy ; Wensley, Barton ; Stone, Ronald ; Adil Assakkali, Mohamed ; Pereira, Douglas ; Miller, Frank ; Mukherjee, Sanjay K ; Dobro, Gustavo ; Sun, Austin ; Jascanu, Nicolae Cc: Zvonar, Bill Subject: Action Required: Closing off StarlingX stx.5.0 storyboards Hello all, You are receiving this email because you are the dev or doc prime for one or more stx.5.0 storyboards that are still open. With the declaration of the release milestone, please take the time to close off the stories by the end of this month. If there is a reason to continue to keep a story open (example: the story will be used for additional work in the next release), please let me know or add the "stx.6.0" tag. Note: Some stories still have active doc updates which I expect should be closed shortly within a couple of weeks.
Others have possibly outdated or invalid tasks that need to be cleaned up (marked as invalid or deleted). I've highlighted what is remaining for each one below. Regards, Ghada Stx.5.0 Active Stories: * https://storyboard.openstack.org/#!/story/2008129 - Edgeworker Management Phase One o Prime: Mingyuan Qi o Remaining: Docs / Mary Camp * https://storyboard.openstack.org/#!/story/2008055 - Upgrade Framework Support o Prime: Bart Wensley / Ram Subramanian o Remaining: Docs / Ron Stone + Acceptance / Bart Wensley * https://storyboard.openstack.org/#!/story/2008529 - PTP Notification o Prime: Ghada Khalil o Remaining: Docs / Adil Mohamed * https://storyboard.openstack.org/#!/story/2008613 - Add support to NFS backend on stx-openstack cinder-backup o Prime: Douglas Pereira o Remaining: Docs / Adil Mohamed * https://storyboard.openstack.org/#!/story/2008760 - Support for application cpu isolation o Prime: Frank Miller o Remaining: To Do Dev tasks * https://storyboard.openstack.org/#!/story/2008117 - Integrate SDO into the StarlingX o Prime: Sanjay Mukherjee o Remaining: To Do Dev tasks * https://storyboard.openstack.org/#!/story/2008162 - CephFS RWX Support in Host-based Ceph o Prime: Frank Miller o Remaining: Docs / Juanita Balaraj * https://storyboard.openstack.org/#!/story/2007267 - Distributed Cloud Scaling o Prime: Bart Wensley / Ram Subramanian o Remaining: To Do Dev tasks * https://storyboard.openstack.org/#!/story/2008132 - SNMPv3 Support o Prime: Gustavo Dobro o Remaining: To Do Test tasks + Acceptance / Greg Waines * https://storyboard.openstack.org/#!/story/2005527 - Ceph Containerization in StarlingX o Prime: Austin Sun o Remaining: To Do Dev tasks * https://storyboard.openstack.org/#!/story/2007472 - Test suite conversion from Robot to Pytest o Prime: Nick Jascanu o Remaining: To Do tasks -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alexandru.dimofte at intel.com Fri Jun 4 16:03:34 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 4 Jun 2021 16:03:34 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210604T013410Z Message-ID: Sanity Test from 2021-June-04 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210604T013410Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210604T013410Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8408 bytes Desc: image001.png URL: From amy at demarco.com Fri Jun 4 21:00:12 2021 From: amy at demarco.com (Amy Marrich) Date: Fri, 4 Jun 2021 16:00:12 -0500 Subject: [Starlingx-discuss] [Diversity] Diversity and Inclusion Meeting Reminder - OFTC Message-ID: The Diversity & Inclusion WG invites members of all OIF projects to attend our next meeting Monday June 7th, at 17:00 UTC in the #openinfra-diversity channel on OFTC. The agenda can be found at https://etherpad.openstack.org/p/ diversity-wg-agenda. Please feel free to add any topics you wish to discuss at the meeting. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexandru.dimofte at intel.com Sun Jun 6 17:37:53 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sun, 6 Jun 2021 17:37:53 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210605T023117Z Message-ID: Sanity Test from 2021-June-05 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210605T023117Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210605T023117Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From Alexander.Williams at commscope.com Mon Jun 7 16:22:54 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Mon, 7 Jun 2021 16:22:54 +0000 Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) Message-ID: Hi all, My current understanding is that whenever adding a host (esp. controller-1) using the system host-update command after a PXE boot, StarlingX will install the base image and then perform the configuration steps to make it a controller, worker, etc., overwriting anything that was previously installed on the machine. 1. Is my understanding of host-update correct, or am I missing something important here? Does host-add do the same thing, but gets run on controller-0 before booting and not after? 2. If I install the StarlingX image on a server that will become controller-1, is there any way to add it to the host list of controller-0 and configure its personality without the server reinstalling StarlingX? Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Tue Jun 8 01:19:10 2021 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 8 Jun 2021 01:19:10 +0000 Subject: [Starlingx-discuss] Cancel StarlingX Distro-OpenStack: Bi-weekly Project Meeting -- 06/08 Message-ID: Hi All: Cancel today openstack distro meeting as no major topic bugs will discuss offline . Thanks. BR Austin Sun -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Tue Jun 8 11:16:16 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 8 Jun 2021 16:46:16 +0530 Subject: [Starlingx-discuss] StarlingX Release 5 and Nvidia GPUs Message-ID: Hi, I need to provision VMs with Nvidia GPU. Do we need to manually install Nvidia drivers as patch or the drivers already available in worker nodes? Is there any documentation about Nvidia support? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin.sun at intel.com Tue Jun 8 01:18:35 2021 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 8 Jun 2021 01:18:35 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Distro-OpenStack: Bi-weekly Project Meeting(Summer Time) Message-ID: Hi folks, This is a new series of bi-weekly project meeting on StarlingX Distro-OpenStack. Your participation to this meeting and/or other offline contribution by all means are highly appreciated! Project Team Etherpad: https://etherpad.openstack.org/p/stx-distro-openstack-meetings The Summer Time Slot for this meeting : CST: 9:00 PM (China, Shanghai ) PST: 7:00 AM (US West , US, Oregon) EST: 9:00 AM (East Canada , Canada Ottawa) Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3561 bytes Desc: not available URL: From jack at jento.io Tue Jun 8 16:10:11 2021 From: jack at jento.io (Jack Morgan) Date: Tue, 8 Jun 2021 09:10:11 -0700 Subject: [Starlingx-discuss] StarlingX Release 5 and Nvidia GPUs In-Reply-To: References: Message-ID: <9f280dc6-e069-f0ab-df22-80d8f02f101c@jento.io> Danishka, On 6/8/21 04:16, open infra wrote: > Hi, > > I need to provision VMs with Nvidia GPU. > Do we need to manually install Nvidia drivers as patch or the drivers > already available in worker nodes? > > Is there any documentation about Nvidia support? You take a look at the hardware acceleration device sections under node management. https://docs.starlingx.io/node_management/kubernetes/index.html#hardware-acceleration-devices Thanks, Jack Morgan From alexandru.dimofte at intel.com Tue Jun 8 18:46:34 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 8 Jun 2021 18:46:34 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210607T144349Z Message-ID: Sanity Test from 2021-June-07 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210607T144349Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210607T144349Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From alexandru.dimofte at intel.com Tue Jun 8 18:49:35 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 8 Jun 2021 18:49:35 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210607T230305Z Message-ID: Sanity Test from 2021-June-07 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210607T230305Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210607T230305Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From Bill.Zvonar at windriver.com Wed Jun 9 12:05:33 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 9 Jun 2021 12:05:33 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 9, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210609T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From alexandru.dimofte at intel.com Wed Jun 9 14:04:02 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 9 Jun 2021 14:04:02 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210609T020429Z Message-ID: Sanity Test from 2021-June-09 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210609T020429Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210609T020429Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From Bill.Zvonar at windriver.com Wed Jun 9 14:55:20 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 9 Jun 2021 14:55:20 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 9, 2021) In-Reply-To: References: Message-ID: >From today's call... * Standing Topics * Build/Sanity * sanities have been all green since last week * Gerrit Reviews in Need of Attention * none this week * Topics for this Week * none this week - the group was all talked out from the TSC * ARs from Previous Meetings * none, Ildiko still trying to get traction with Docker Hub * Open Requests for Help * Adding hosts (Bare Metal AIO Duplex) * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011564.html * Greg will respond * Build Matters (if required) * none this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, June 9, 2021 8:06 AM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (June 9, 2021) Hi all, reminder of the weekly TSC/Community coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210609T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Ankush.Rai at commscope.com Wed Jun 9 16:04:00 2021 From: Ankush.Rai at commscope.com (Rai, Ankush) Date: Wed, 9 Jun 2021 16:04:00 +0000 Subject: [Starlingx-discuss] CPU-isolation support Message-ID: Hi, Starlingx 4.0 does not support CPU-isolation. Does this support enabled in starlingx 5.0 or 6.0? Please let us know if any work-around available for this issue in 4.0? Regards, Ankush -------------- next part -------------- An HTML attachment was scrubbed... URL: From maryx.camp at intel.com Thu Jun 10 00:33:26 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 10 Jun 2021 00:33:26 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 09-Jun-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 09-Jun-21  All -- reviews merged since last meeting:  22 All -- bug status -- 17 total - team agrees to defer all low priority LP until the upstreaming effort is completed.  Status/questions/opens Several reviewers are pressing for changes in commit messages - ie, where is the task/story info, why are they so long?  Unclear why they are invested in the commit messages. Do we need to push back on this?  Create "template" for commit messages for future updates. Add to parking lot for retrospective. How to address the gap between WR Jira tickets and LP - Story/Task info, without having to replicate all the info in 2 systems.     Is there a reason why we can't reference Jira numbers in the commit messages? This is a question for Greg.  
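One possible shape for such a commit-message template, assuming the Gerrit footers already used across StarlingX repos (the bug, story, and task numbers below are placeholders; typically only the Launchpad bug line or the Story/Task pair is used, depending on where the work is tracked):

    Update <guide name>: short summary of the docs change

    One or two sentences on what changed and why, so a reviewer does not
    need to open the tracker to understand the change.

    Closes-Bug: #1234567
    Story: 2001234
    Task: 40123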
From fungi at yuggoth.org Thu Jun 10 01:19:19 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 10 Jun 2021 01:19:19 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 09-Jun-21 In-Reply-To: References: Message-ID: <20210610011918.qh3owpbdhdrdy4xl@yuggoth.org> On 2021-06-10 00:33:26 +0000 (+0000), Camp, MaryX wrote: [...] > How to address the gap between WR Jira tickets and LP - Story/Task > info, without having to replicate all the info in 2 systems.     > Is there a reason why we can't reference Jira numbers in the > commit messages? This is a question for Greg.  I'm not Greg, but still curious. If a non-WR-employed contributor, aspiring contributor, user, et cetera is reviewing a change or looking at a commit in Git history which references a Jira ticket, how do they obtain the content of that ticket so they have proper context? Are those tickets publicly accessible? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From maryx.camp at intel.com Thu Jun 10 02:55:33 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 10 Jun 2021 02:55:33 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 09-Jun-21 In-Reply-To: <20210610011918.qh3owpbdhdrdy4xl@yuggoth.org> References: <20210610011918.qh3owpbdhdrdy4xl@yuggoth.org> Message-ID: Hi Jeremy, that is exactly the issue we were discussing. Sometimes the context is simply "a developer emailed the docs team to change this guide on line 23 from A to B." If the commit message has the necessary details about the change, could the Jira number be just an identifier? Or does having a Jira number send the message that a non-public system is being used? In the example above, the simple change is opened as a Jira ticket and then is copied into LP to be linked in the commit message. We were chatting about alternatives, because no one wants to update/track info in 2 systems if there's a more efficient method. Appreciate your input -- good to know someone reads the docs meeting notes 😊 thanks, Mary Camp Kelly Services Technical Writer | maryx.camp at intel.com -----Original Message----- From: Jeremy Stanley Sent: Wednesday, June 9, 2021 9:19 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [docs] [meeting] Docs team notes 09-Jun-21 On 2021-06-10 00:33:26 +0000 (+0000), Camp, MaryX wrote: [...] > How to address the gap between WR Jira tickets and LP - Story/Task > info, without having to replicate all the info in 2 systems. > Is there a reason why we can't reference Jira numbers in the commit > messages? This is a question for Greg. I'm not Greg, but still curious. If a non-WR-employed contributor, aspiring contributor, user, et cetera is reviewing a change or looking at a commit in Git history which references a Jira ticket, how do they obtain the content of that ticket so they have proper context? Are those tickets publicly accessible? -- Jeremy Stanley From anyrude10 at gmail.com Thu Jun 10 11:09:24 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Thu, 10 Jun 2021 16:39:24 +0530 Subject: [Starlingx-discuss] Openstack Application Upgrade Message-ID: Hi Team, We had deployed StarlingX 4.0 in Bare metal Standard with Controller Storage Mode. We have also installed Openstack application from StarlingX cengn mirror stx-openstack-1.0-49-centos-stable-versioned. On the top of Openstack, few workloads are also running. 
As per the link below, https://docs.starlingx.io/updates/kubernetes/upgrading-all-in-one-duplex-or-standard.html#upgrading-all-in-one-duplex-or-standard There is an option to upgrade starlingx software release from 4.0 to 5.0 without hampering the existing configuration and in a seperate link, openstack patch upgrade is also possible https://docs.starlingx.io/updates/openstack/apply-update-to-the-stx-openstack-application.html Is there any option to upgrade openstack release to the victoria or another version release after ussuri, once we do upgrade from 4.0 to 5.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu Jun 10 14:01:38 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 10 Jun 2021 14:01:38 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 09-Jun-21 In-Reply-To: References: Message-ID: > How to address the gap between WR Jira tickets and LP - Story/Task info, without having to replicate all the info in 2 systems. > Is there a reason why we can't reference Jira numbers in the commit messages? This is a question for Greg. For internal wind river types generating bugs on docs, we can remind people that if it is a problem that is in the upstream starlingx docs, that they should be generating a starlingx LP. And then ONLY use the LP; no internal WR JIRA would be created. If the internal WR JIRA is already created ... not sure we can do much other than duplicate the info in an LP. Greg. -----Original Message----- From: Camp, MaryX Sent: Wednesday, June 9, 2021 8:33 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 09-Jun-21 [Please note: This e-mail is from an EXTERNAL e-mail address] Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 09-Jun-21 All -- reviews merged since last meeting: 22 All -- bug status -- 17 total - team agrees to defer all low priority LP until the upstreaming effort is completed. Status/questions/opens Several reviewers are pressing for changes in commit messages - ie, where is the task/story info, why are they so long? Unclear why they are invested in the commit messages. Do we need to push back on this? Create "template" for commit messages for future updates. Add to parking lot for retrospective. How to address the gap between WR Jira tickets and LP - Story/Task info, without having to replicate all the info in 2 systems. Is there a reason why we can't reference Jira numbers in the commit messages? This is a question for Greg. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From fungi at yuggoth.org Thu Jun 10 16:49:19 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 10 Jun 2021 16:49:19 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 09-Jun-21 In-Reply-To: References: <20210610011918.qh3owpbdhdrdy4xl@yuggoth.org> Message-ID: <20210610164919.5rdmq4vjjcgrs6fc@yuggoth.org> On 2021-06-10 02:55:33 +0000 (+0000), Camp, MaryX wrote: [...] 
> Sometimes the context is simply "a developer emailed the docs team > to change this guide on line 23 from A to B." If the commit > message has the necessary details about the change, could the Jira > number be just an identifier? Or does having a Jira number send > the message that a non-public system is being used? > > In the example above, the simple change is opened as a Jira ticket > and then is copied into LP to be linked in the commit message. We > were chatting about alternatives, because no one wants to > update/track info in 2 systems if there's a more efficient method. [...] It's more about separating the proprietary product (Titanium Cloud, I guess?) defect tracking from the community open source development process which should, ideally, be independent of it. As the project hopefully grows and contributor affiliations diverge over time, what constitutes a bug in one company's product doesn't necessarily equal a bug in the open source software on which it's based. When working on a solution upstream, the contributors from WR will need to be able to articulate to the rest of the community what the perceived defect is anyway, for example by describing it in a new public bug report or within the commit message. Any time I look at a commit which references a private or otherwise undocumented tracking system, I find myself wondering what additional information is hidden there which I'm not sufficiently privileged to read. It doesn't benefit the general community for the open source project to include references to information in a private product tracking system, but does sow seeds of suspicion and imply that not all contributors are on equal footing when participating in the project (even if that's not really the case). On the other hand, many trackers (and I'm guessing Jira is no exception) have functionality for indicating when something is being addressed elsewhere, perhaps as a URL to the public StarlingX bug report. Something like that would still allow WR support staff and customers to directly follow the public work which is happening to address a particular problem, without unnecessarily embedding private references into it. If you do want to be able to reference private trackers for products within your public source code and commit metadata, you'll want to think hard about how you plan to support that workflow in the future when multiple companies have products based on that project, including how to disambiguate WR Jira IDs from OtherCompany Jira IDs without creating even more confusion. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Greg.Waines at windriver.com Thu Jun 10 17:31:25 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 10 Jun 2021 17:31:25 +0000 Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) In-Reply-To: References: Message-ID: Hey Alex, * For question 1 * your understanding of the "host-update" use case is correct i.e. * power on host which dhcp's on mgmt. network * host gets auto discovered by controller-0 and auto-provisioned without personality * user uses host-update to set personality * controller-0 installs software for that personality * "host-add" use case is sort of the opposite ... configure host first, then power it on i.e. * user uses "host-add" command and configures host in system's inventory with identifying information such as BMC IP Address, mgmt. 
network MAC, etc., and the host's personality * user uses "host-power-on" command to power on the host via the BMC * host powers on, dhcp's on mgmt. network * gets recognized by controller-0 from previously configured host info (e.g. mgmt. MAC, ...) * controller-0 installs software for the previously configured personality of this host. * For question 2 * pretty sure answer is no * I believe starlingx sysinv/mtce/swmgmt software will always want to install software on a new host * * ... although, thinking of question 1, you could try doing a host-add with the identifying information of controller-1, and power on controller-1 and see if controller-0 will try to re-install or not * no matter what there will be software versioning checks that happen at boot time to ensure controller-1 is running the same software as controller-0 and is patch current based on controller-0's applied patches * * ... however, why do you want to do this ? * like is this a real use case ? Greg. From: Williams, Alexander Sent: Monday, June 7, 2021 12:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, My current understanding is that whenever adding a host (esp. controller-1) using the system host-update command after a PXE boot, StarlingX will install the base image and then perform the configuration steps to make it a controller, worker, etc., overwriting anything that was previously installed on the machine. 1. Is my understanding of host-update correct, or am I missing something important here? Does host-add do the same thing, but gets run on controller-0 before booting and not after? 2. If I install the StarlingX image on a server that will become controller-1, is there any way to add it to the host list of controller-0 and configure its personality without the server reinstalling StarlingX? Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu Jun 10 17:54:41 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 10 Jun 2021 17:54:41 +0000 Subject: [Starlingx-discuss] TSC Call (June 9, 2021) Message-ID: >From yesterday's TSC Call, * This is a REMINDER for all PROJECT LEADS to review the proposed StarlingX R6 Feature List: * https://docs.google.com/spreadsheets/d/13p0BMlBgJXUVForOFsblAJq9jA1-FMBlmhV5TIc70IE/edit#gid=1107209846 * And let me know if * features assigned to you will NOT be resource-able for StarlingX R6 * you can update the 'final disposition' if required to prep or In or partial * or * features are MISSING from this list that you plan to contribute to StarlingX R6 I would like to close on this list in the next few TSC Meetings, Greg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Thu Jun 10 18:14:46 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 10 Jun 2021 14:14:46 -0400 Subject: [Starlingx-discuss] Stop recording the meetings Message-ID: Hi, I’m reaching out to you about the StarlingX meeting recordings. If you take a look at the meeting wiki[1] you will see that the most recent links to recordings are over a year old. During this time I haven’t received any requests or complaints until very recently. But this recent outreach was also about to check on the recordings in general just to understand if the meetings are still happening or not and not to listen back on either of them. 
Following the mailing list you can also see that most teams are posting their meeting logs that are usually on their meeting etherpads which gives everyone a chance to catch up on what was discussed and is a primary way to keep meeting history. Based on the above I would like to propose to stop recording the meetings. Please respond to this thread by the end of next week (June 20) if you have any questions or concerns to take into account before taking action. Thanks and Best Regards, Ildikó [1] https://wiki.openstack.org/wiki/Starlingx/Meetings From Alexander.Williams at commscope.com Thu Jun 10 19:07:07 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Thu, 10 Jun 2021 19:07:07 +0000 Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) In-Reply-To: References: Message-ID: Hi Greg, Thanks for your response! I'll be giving the host-add a shot. The reason I asked is because pre-installing the image potentially provides a speedup to deployment times. Installing the images beforehand would cut down on the downtime waiting for the second controller to be provisioned without personality and on total time assuming that the images for both controllers are installed simultaneously. Best, Alex From: Waines, Greg Sent: Thursday, June 10, 2021 1:31 PM To: Williams, Alexander ; starlingx-discuss at lists.starlingx.io Subject: RE: Adding hosts (Bare Metal AIO Duplex) Hey Alex, For question 1 your understanding of the "host-update" use case is correct i.e. power on host which dhcp's on mgmt. network host gets auto discovered by controller-0 and auto-provisioned wit Hey Alex, * For question 1 * your understanding of the "host-update" use case is correct i.e. * power on host which dhcp's on mgmt. network * host gets auto discovered by controller-0 and auto-provisioned without personality * user uses host-update to set personality * controller-0 installs software for that personality * "host-add" use case is sort of the opposite ... configure host first, then power it on i.e. * user uses "host-add" command and configures host in system's inventory with identifying information such as BMC IP Address, mgmt. network MAC, etc., and the host's personality * user uses "host-power-on" command to power on the host via the BMC * host powers on, dhcp's on mgmt. network * gets recognized by controller-0 from previously configured host info (e.g. mgmt. MAC, ...) * controller-0 installs software for the previously configured personality of this host. * For question 2 * pretty sure answer is no * I believe starlingx sysinv/mtce/swmgmt software will always want to install software on a new host * * ... although, thinking of question 1, you could try doing a host-add with the identifying information of controller-1, and power on controller-1 and see if controller-0 will try to re-install or not * no matter what there will be software versioning checks that happen at boot time to ensure controller-1 is running the same software as controller-0 and is patch current based on controller-0's applied patches * * ... however, why do you want to do this ? * like is this a real use case ? Greg. From: Williams, Alexander > Sent: Monday, June 7, 2021 12:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, My current understanding is that whenever adding a host (esp. 
controller-1) using the system host-update command after a PXE boot, StarlingX will install the base image and then perform the configuration steps to make it a controller, worker, etc., overwriting anything that was previously installed on the machine. 1. Is my understanding of host-update correct, or am I missing something important here? Does host-add do the same thing, but gets run on controller-0 before booting and not after? 2. If I install the StarlingX image on a server that will become controller-1, is there any way to add it to the host list of controller-0 and configure its personality without the server reinstalling StarlingX? Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Thu Jun 10 19:24:56 2021 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Thu, 10 Jun 2021 19:24:56 +0000 Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) In-Reply-To: References: Message-ID: Alex, I don't think you will be able to pre-install controller-1 and then add it to the system. When the first controller is installed, a unique UUID is generated. That UUID is then copied on to each host in the system as it is installed. I'm pretty sure that if you were to pre-install a host (e.g. from an ISO), the UUID will not match and when it boots it will fail to initialize (there will be a configuration failure and the services won't come up). Bart From: Williams, Alexander Sent: Thursday, June 10, 2021 3:07 PM To: Waines, Greg ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi Greg, Thanks for your response! I'll be giving the host-add a shot. The reason I asked is because pre-installing the image potentially provides a speedup to deployment times. Installing the images beforehand would cut down on the downtime waiting for the second controller to be provisioned without personality and on total time assuming that the images for both controllers are installed simultaneously. Best, Alex From: Waines, Greg > Sent: Thursday, June 10, 2021 1:31 PM To: Williams, Alexander >; starlingx-discuss at lists.starlingx.io Subject: RE: Adding hosts (Bare Metal AIO Duplex) Hey Alex, For question 1 your understanding of the "host-update" use case is correct i.e. power on host which dhcp's on mgmt. network host gets auto discovered by controller-0 and auto-provisioned wit Hey Alex, * For question 1 * your understanding of the "host-update" use case is correct i.e. * power on host which dhcp's on mgmt. network * host gets auto discovered by controller-0 and auto-provisioned without personality * user uses host-update to set personality * controller-0 installs software for that personality * "host-add" use case is sort of the opposite ... configure host first, then power it on i.e. * user uses "host-add" command and configures host in system's inventory with identifying information such as BMC IP Address, mgmt. network MAC, etc., and the host's personality * user uses "host-power-on" command to power on the host via the BMC * host powers on, dhcp's on mgmt. network * gets recognized by controller-0 from previously configured host info (e.g. mgmt. MAC, ...) * controller-0 installs software for the previously configured personality of this host. * For question 2 * pretty sure answer is no * I believe starlingx sysinv/mtce/swmgmt software will always want to install software on a new host * * ... 
although, thinking of question 1, you could try doing a host-add with the identifying information of controller-1, and power on controller-1 and see if controller-0 will try to re-install or not * no matter what there will be software versioning checks that happen at boot time to ensure controller-1 is running the same software as controller-0 and is patch current based on controller-0's applied patches * * ... however, why do you want to do this ? * like is this a real use case ? Greg. From: Williams, Alexander > Sent: Monday, June 7, 2021 12:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, My current understanding is that whenever adding a host (esp. controller-1) using the system host-update command after a PXE boot, StarlingX will install the base image and then perform the configuration steps to make it a controller, worker, etc., overwriting anything that was previously installed on the machine. 1. Is my understanding of host-update correct, or am I missing something important here? Does host-add do the same thing, but gets run on controller-0 before booting and not after? 2. If I install the StarlingX image on a server that will become controller-1, is there any way to add it to the host list of controller-0 and configure its personality without the server reinstalling StarlingX? Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Fri Jun 11 11:29:50 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Fri, 11 Jun 2021 11:29:50 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210611T013347Z Message-ID: Sanity Test from 2021-June-11 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210611T013347Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL STANDARD Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210611T013347Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Fri Jun 11 11:34:50 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Fri, 11 Jun 2021 11:34:50 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210611T013347Z Message-ID: Sanity Test from 2021-June-11 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210611T013347Z/outputs/iso/ ) Status: GREEN Executed on BARE METAL DUPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210611T013347Z/outputs/helm-charts/ =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolae.jascanu at intel.com Sun Jun 13 18:11:37 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Sun, 13 Jun 2021 18:11:37 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210611T013347Z Message-ID: Sanity Test from 2021-June-11 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210611T013347Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL SIMPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210611T013347Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Mon Jun 14 14:09:25 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 14 Jun 2021 10:09:25 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 142 - Failure! Message-ID: <1323762000.128.1623679769375.JavaMail.javamailuser@localhost> Project: STX_build_docker_images_layered Build #: 142 Status: Failure Timestamp: 20210611T024609Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210611T023023Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20210611T023023Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210611T023023Z/logs MASTER_BUILD_NUMBER: 147 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20210611T023023Z/logs MASTER_JOB_NAME: STX_build_layer_containers_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers PUBLISH_TIMESTAMP: 20210611T023023Z DOCKER_BUILD_ID: jenkins-master-containers-20210611T023023Z-builder TIMESTAMP: 20210611T023023Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210611T023023Z/inputs LAYER: containers PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20210611T023023Z/outputs From build.starlingx at gmail.com Mon Jun 14 14:09:32 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 14 Jun 2021 10:09:32 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 147 - Failure! 
Message-ID: <143120662.131.1623679772865.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 147 Status: Failure Timestamp: 20210611T023023Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210611T023023Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From Davlet.Panech at windriver.com Mon Jun 14 14:38:14 2021 From: Davlet.Panech at windriver.com (Panech, Davlet) Date: Mon, 14 Jun 2021 14:38:14 +0000 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 147 - Failure! In-Reply-To: <143120662.131.1623679772865.JavaMail.javamailuser@localhost> References: <143120662.131.1623679772865.JavaMail.javamailuser@localhost> Message-ID: Build failed due to an intermittent network issue with accessing Git repositories at opendev.org. I restarted the build. ________________________________ From: build.starlingx at gmail.com Sent: June 14, 2021 10:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 147 - Failure! [Please note: This e-mail is from an EXTERNAL e-mail address] Project: STX_build_layer_containers_master_master Build #: 147 Status: Failure Timestamp: 20210611T023023Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210611T023023Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michel.Thebeau at windriver.com Mon Jun 14 14:50:02 2021 From: Michel.Thebeau at windriver.com (Thebeau, Michel) Date: Mon, 14 Jun 2021 14:50:02 +0000 Subject: [Starlingx-discuss] CPU-isolation support In-Reply-To: References: Message-ID: Hi Ankush, Does this document help? https://docs.starlingx.io/admintasks/isolating-cpu-cores-to-enhance-application-performance.html (also parent document here: https://docs.starlingx.io/admintasks/index.html#application-management) M ________________________________ From: Rai, Ankush Sent: 09 June 2021 12:04 PM To: starlingx-discuss at lists.starlingx.io Cc: Sambandan, Devaraj ; Dharwadkar, Sriram Subject: [Starlingx-discuss] CPU-isolation support [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, Starlingx 4.0 does not support CPU-isolation. Is this support enabled in starlingx 5.0 or 6.0? Please let us know if any work-around is available for this issue in 4.0. Regards, Ankush -------------- next part -------------- An HTML attachment was scrubbed... URL:
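For readers looking for the concrete steps, a rough sketch of the CPU-isolation workflow that the isolating-cpu-cores document linked above describes, assuming the "application-isolated" core function it introduces (command flags, core counts and hostnames here are illustrative only; verify against the linked page for your release):

  # mark a number of cores on a worker as isolated for application use
  system host-lock worker-1
  system host-cpu-modify -f application-isolated -p0 6 worker-1   # e.g. 6 cores on processor 0
  system host-unlock worker-1
  system host-cpu-list worker-1                                   # confirm the assigned core functions

Workloads can then request the isolated cores through the mechanism described in that document (on recent releases this is exposed to Kubernetes as an extended resource for isolated CPUs); the exact resource name and behaviour are release-dependent.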
From sxmatch1986 at gmail.com Tue Jun 15 02:51:27 2021 From: sxmatch1986 at gmail.com (hao wang) Date: Tue, 15 Jun 2021 10:51:27 +0800 Subject: [Starlingx-discuss] [TSC election]Do not extend term of TSC seat Message-ID: Hi, everyone, About the TSC seat, I'm very sorry that I won't extend the term because of changes in my work content, and I'm afraid that my limited energy wouldn't allow me to do my TSC duties well. I'm very pleased and honored that I could work with this great team for two years; you opened my mind to learn so much about open source and edge computing, and I also want to thank you all so much for helping me to work with this project. I will still keep my eyes on edge computing tech and look forward to a future where we can work together again. Thank you. From alexandru.dimofte at intel.com Tue Jun 15 07:25:56 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 15 Jun 2021 07:25:56 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210614T141418Z Message-ID: Sanity Test from 2021-June-14 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210614T141418Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210614T141418Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 89 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From alexandru.dimofte at intel.com Tue Jun 15 07:49:21 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 15 Jun 2021 07:49:21 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210614T235049Z Message-ID: Sanity Test from 2021-June-14 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210614T235049Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210614T235049Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 90 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From openinfradn at gmail.com Tue Jun 15 12:59:23 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 15 Jun 2021 18:29:23 +0530 Subject: [Starlingx-discuss] Error creating VMs Message-ID: Hi, I have deployed StarlingX R5 with Standard Dedicated Storage. I noticed that VM creation fails without much information, but I managed to create VMs in STX R5 Simplex AIO. I am not sure if this is due to a misconfiguration of the networks or of worker-0 (currently only one worker node is available in the standard deployment). STX alarm: "underlying-resource-unavailable" http://paste.openstack.org/show/806626/ I would highly appreciate it if someone could guide me on how to dig further (what logs to check) or how to fix this issue. Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Alexander.Williams at commscope.com Tue Jun 15 16:42:02 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Tue, 15 Jun 2021 16:42:02 +0000 Subject: [Starlingx-discuss] Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) Message-ID: Hi all, I'm in the process of installing StarlingX 4.0.1 as a Bare Metal AIO Duplex for a Central Cloud, and so far have been unable to progress past the ansible bootstrap step, where it fails at the step "Wait for service endpoints reconfiguration to complete". I've attached the ansible.log from my most recent attempt to help debugging. If it would help, I can also send the localhost.yml file I've been using and the sysinv.log. Thank you in advance for any help you can offer. Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ansible.log Type: application/octet-stream Size: 82457 bytes Desc: ansible.log URL: From Al.Bailey at windriver.com Tue Jun 15 17:32:50 2021 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Tue, 15 Jun 2021 17:32:50 +0000 Subject: [Starlingx-discuss] Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) In-Reply-To: References: Message-ID: I don't know if it helps, but that timeout had to be increased to 720 because it took a long time on certain hardware. See: https://github.com/starlingx/ansible-playbooks/commit/0a1c06a66bc286b306bfdf4ada7cf823787b7a94 You may be able to increase the value even more. The logs under /var/log/puppet should indicate if there was a failure, and if not, may indicate how long it took. Al From: Williams, Alexander Sent: Tuesday, June 15, 2021 12:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, I'm in the process of installing StarlingX 4.0.1 as a Bare Metal AIO Duplex for a Central Cloud, and so far have been unable to progress past the ansible bootstrap step, where it fails at the step "Wait for service endpoints reconfiguration to complete". I've attached the ansible.log from my most recent attempt to help debugging. If it would help, I can also send the localhost.yml file I've been using and the sysinv.log. Thank you in advance for any help you can offer. Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From Linda.Wang at windriver.com Mon Jun 14 20:44:06 2021 From: Linda.Wang at windriver.com (Linda Wang) Date: Mon, 14 Jun 2021 13:44:06 -0700 Subject: [Starlingx-discuss] Bi-Weekly StarlingX OS Distro & Multi-OS Meeting Minutes: June 9, 2021 Message-ID: <556cad58-7f74-4647-7d69-961677b2f4c0@windriver.com> 06/09/2021 Attendees: Charles Short, Davlet Paench, Mark Asselstine, Scott Little, Frank Miller, Bill Zvonar, Bart Wensley, Steve Geary, Ramaswamy S. 1. OS Distro (Mark) * Python3 Status (Frank) o Chuck is busy with really big rebase.  Once that is completed, then will have more people to help with python3 conversion work. * Debian OS Transition Status (Mark) o 5.10 Kernel: + Greg has provided +1 on kernel specification + JiPing send out her first review request on 5.10 kernel o Mark has been busy on a spec on outline technology on outflow containers. + 1 of them is repository manager (probably goingto be used Pulp. 
but not supporting source packages, so worked on that this week) + Talk to Debian maintainer on it, seems to be acceptable. * Team also send out first patch on STX tool for review. Also push code on python tools (on controller side) o Also using minikube to fire up the container.  Hopefully will have these ready by the next meeting. * No plan to change the hardware specs. 2. Multi-OS (Jackie) * For Yocto project need to support rpm pkgs, and Pulp supports both rpm and Debian source pkgs. * The new proposal aligns with Jackie, and the team.  Therefore, continue moving forward with the proposal, and continue discussion with Jackie, and Gil. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ankush.Rai at commscope.com Tue Jun 15 18:13:55 2021 From: Ankush.Rai at commscope.com (Rai, Ankush) Date: Tue, 15 Jun 2021 18:13:55 +0000 Subject: [Starlingx-discuss] PXE-boot error with 5.0 build Message-ID: Hi, We are trying to bring up Starlingx in ALL-IN-ONE DUPLEX mode, using the 5.0 release build. The first node is coming up properly but the second node fails in PXE boot. Note: We have created OAM and MGMT VLANs over the same interface and enabled pxeboot on the interface. This networking was working fine with starlingx 4.0; we are seeing the issue with 5.0 only. The screen-shot is attached for your reference. Thanks, Ankush -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PXE_err_2.png Type: image/png Size: 18484 bytes Desc: PXE_err_2.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PXE_err_2.png Type: image/png Size: 18484 bytes Desc: PXE_err_2.png URL: From Ankush.Rai at commscope.com Tue Jun 15 19:53:54 2021 From: Ankush.Rai at commscope.com (Rai, Ankush) Date: Tue, 15 Jun 2021 19:53:54 +0000 Subject: [Starlingx-discuss] Starlings 4.0 to 5.0 upgrade Message-ID: Is 4.0 to 5.0 upgrade supported? Thanks, Ankush -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alexander.Williams at commscope.com Wed Jun 16 01:24:28 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Wed, 16 Jun 2021 01:24:28 +0000 Subject: [Starlingx-discuss] Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) In-Reply-To: References: Message-ID: Hi Bailey, Thanks for the help - I've been able to increase the timeout, but after setting it to time out after ~5 hours it has not made any progress. Is there anything in particular in the puppet logs that I should be looking for? Thanks again! Best, Alex From: Bailey, Henry Albert (Al) Sent: Tuesday, June 15, 2021 1:33 PM To: Williams, Alexander ; starlingx-discuss at lists.starlingx.io Subject: RE: Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) I don't know if it helps, but that timeout had to be increased to 720 because it took a long time on certain hardware. See: https://github.com/starlingx/ansible-playbooks/commit/0a1c06a66bc286b306bfdf4ada7cf823787b7a94 You may be able to increase the value even more. The logs under /var/log/puppet should indicate if there was a failure, and if not, may indicate how long it took. Al From: Williams, Alexander > Sent: Tuesday, June 15, 2021 12:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, I'm in the process of installing StarlingX 4.0.1 as a Bare Metal AIO Duplex for a Central Cloud, and so far have been unable to progress past the ansible bootstrap step, where it fails at the step "Wait for service endpoints reconfiguration to complete". I've attached the ansible.log from my most recent attempt to help debugging. If it would help, I can also send the localhost.yml file I've been using and the sysinv.log. Thank you in advance for any help you can offer. Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL:
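As a starting point for the question above about what to look for, a quick way to scan those puppet logs is something along these lines (a sketch only; run on controller-0, and note that the exact directory layout under /var/log/puppet can differ between releases):

  # list the puppet apply runs, newest first, then search them for problems
  sudo ls -lt /var/log/puppet/
  sudo grep -riE "error|warning" /var/log/puppet/ | less
  # follow the log of the most recent run while the manifest is applying
  sudo tail -f /var/log/puppet/<latest-run-directory>/puppet.log   # <latest-run-directory> is a placeholder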
From build.starlingx at gmail.com Wed Jun 16 04:40:13 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 16 Jun 2021 00:40:13 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1276 - Failure! Message-ID: <1495671829.136.1623818414834.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1276 Status: Failure Timestamp: 20210616T043307Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210616T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210616T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210616T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210616T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Wed Jun 16 04:40:16 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 16 Jun 2021 00:40:16 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 952 - Failure!
Message-ID: <1848894328.139.1623818417017.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 952 Status: Failure Timestamp: 20210616T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210616T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From alexandru.dimofte at intel.com Wed Jun 16 07:38:17 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 16 Jun 2021 07:38:17 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210616T015544Z Message-ID: Sanity Test from 2021-June-16 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210616T015544Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210616T015544Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 71 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 83 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 8413 bytes Desc: image001.png URL: From Barton.Wensley at windriver.com Wed Jun 16 12:04:29 2021 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 16 Jun 2021 12:04:29 +0000 Subject: [Starlingx-discuss] Starlings 4.0 to 5.0 upgrade In-Reply-To: References: Message-ID: Ankush, Upgrades between major versions are supported by the infrastructure described in the following specification: https://docs.starlingx.io/specs/specs/stx-4.0/approved/starlingx-2007403-platform-upgrades.html However, the starlingx community does not test or support upgrades (this work is done by commercial products that build on top of starlingx). To do an upgrade, you would need to do your own upgrade testing and fix any issues you uncover. Upgrades often require additional changes on the "from" release side, so you would likely need to build your own patches as well. Bart From: Rai, Ankush Sent: Tuesday, June 15, 2021 3:54 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Starlings 4.0 to 5.0 upgrade [Please note: This e-mail is from an EXTERNAL e-mail address] Is 4.0 to 5.0 upgrade supported? Thanks, Ankush -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jun 16 12:51:59 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 16 Jun 2021 12:51:59 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 16, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210616T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Bill.Zvonar at windriver.com Wed Jun 16 14:29:03 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 16 Jun 2021 14:29:03 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 16, 2021) In-Reply-To: References: Message-ID: >From this week's call... * Standing Topics * Build/Sanity * sanities all green, no build issues other than some related to intermittent connectivity blips * Gerrit Reviews in Need of Attention * https://review.opendev.org/c/starlingx/utilities/+/793256 zuul job for bandit * Topics for this Week * nothing this week * ARs from Previous Meetings * nothing this week * Open Requests for Help * CPU-isolation support * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011592.html * Mitch did respond to this already * Openstack Application Upgrade * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011578.html * Austin to respond * Error creating VMs * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011596.html * Austin to respond re: guidance on checking Nova logs * PXE-boot error with 5.0 build * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011600.html * Bill to check with PXE-boot savvy folks * Build Matters (if required) * nothing this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, June 16, 2021 8:52 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community (& TSC) Call (June 16, 2021) Hi all, reminder of the weekly TSC/Community coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210616T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From austin.sun at intel.com Wed Jun 16 14:33:26 2021 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 16 Jun 2021 14:33:26 +0000 Subject: [Starlingx-discuss] Error creating VMs In-Reply-To: References: Message-ID: Hi Danishka: Please check openstack logs which are under /var/log/pods. You might check worker nodes logs. Thanks. BR Austin Sun. From: open infra Sent: Tuesday, June 15, 2021 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Error creating VMs Hi, I have deployed StarlingX R 5, with Standard Dedicated Storage. Noticed that VM creation fail without much information. But I managed to created VMs in STX R5 Simplex AIO. I am not sure if this is due to misconfiguration of networks or worker-0 (currently only one worker node is available in standard deployment). STX Alam " underlying-resource-unavailable" http://paste.openstack.org/show/806626/ I highly appreciate if someone can guide to dig further (what logs to check ) or to fix this issue. Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Don.Penney at windriver.com Wed Jun 16 14:44:53 2021 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 16 Jun 2021 14:44:53 +0000 Subject: [Starlingx-discuss] PXE-boot error with 5.0 build In-Reply-To: References: Message-ID: The error in your screenshot indicates trouble retrieving the initrd. I would check the dnsmasq logs in daemon.log on the active controller. You'll see the DHCP request and response there, and should see tftp requests for the pxelinux.cfg file, followed by the bzImage (kernel) and initrd. In my test, the initrd is the last of the three files, so it's surprising there'd be an issue on that one and not the earlier files. Maybe the logs will shed some light for you. Example from my system: [sysadmin at controller-0 ~(keystone_admin)]$ tail -f /var/log/daemon.log | grep dnsmasq 2021-06-16T14:40:21.000 controller-0 dnsmasq-dhcp[101586]: info DHCPDISCOVER(enp0s8) 08:00:27:2a:b1:e7 2021-06-16T14:40:21.000 controller-0 dnsmasq-dhcp[101586]: info DHCPOFFER(enp0s8) 192.168.204.3 08:00:27:2a:b1:e7 2021-06-16T14:40:23.000 controller-0 dnsmasq-dhcp[101586]: info DHCPREQUEST(enp0s8) 192.168.204.3 08:00:27:2a:b1:e7 2021-06-16T14:40:23.000 controller-0 dnsmasq-dhcp[101586]: info DHCPACK(enp0s8) 192.168.204.3 08:00:27:2a:b1:e7 controller-1 2021-06-16T14:40:23.000 controller-0 dnsmasq-tftp[101586]: err error 0 TFTP Aborted received from 192.168.204.3 2021-06-16T14:40:23.000 controller-0 dnsmasq-tftp[101586]: info failed sending /pxeboot/pxelinux.0 to 192.168.204.3 2021-06-16T14:40:23.000 controller-0 dnsmasq-tftp[101586]: info sent /pxeboot/pxelinux.0 to 192.168.204.3 2021-06-16T14:40:23.000 controller-0 dnsmasq-tftp[101586]: err file /pxeboot/pxelinux.cfg/ecf77d23-7792-4e4f-95bc-46a4d0ca2b36 not found 2021-06-16T14:40:23.000 controller-0 dnsmasq-tftp[101586]: info sent /pxeboot/pxelinux.cfg/01-08-00-27-2a-b1-e7 to 192.168.204.3 2021-06-16T14:40:23.000 controller-0 dnsmasq-tftp[101586]: info sent /pxeboot/menu.c32 to 192.168.204.3 2021-06-16T14:40:23.000 controller-0 dnsmasq-tftp[101586]: info sent /pxeboot/pxelinux.cfg/01-08-00-27-2a-b1-e7 to 192.168.204.3 2021-06-16T14:40:25.000 controller-0 dnsmasq-script[101586]: debug sysinv 2021-06-16 14:40:25.373 2363971 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'add' for mac '08:00:27:2a:b1:e7' with ip '192.168.204.3' 2021-06-16T14:40:25.000 controller-0 dnsmasq-script[101586]: debug sysinv 2021-06-16 14:40:25.565 2363971 INFO sysinv.openstack.common.rpc.common [-] Connected to AMQP server on 192.168.204.1:5672 2021-06-16T14:40:30.000 controller-0 dnsmasq-tftp[101586]: info sent /pxeboot/rel-21.05/installer-bzImage to 192.168.204.3 2021-06-16T14:40:39.000 controller-0 dnsmasq-tftp[101586]: info sent /pxeboot/rel-21.05/installer-initrd to 192.168.204.3 2021-06-16T14:40:57.000 controller-0 dnsmasq-dhcp[101586]: info DHCPDISCOVER(enp0s8) 08:00:27:2a:b1:e7 2021-06-16T14:40:57.000 controller-0 dnsmasq-dhcp[101586]: info DHCPOFFER(enp0s8) 192.168.204.3 08:00:27:2a:b1:e7 2021-06-16T14:40:57.000 controller-0 dnsmasq-dhcp[101586]: info DHCPREQUEST(enp0s8) 192.168.204.3 08:00:27:2a:b1:e7 2021-06-16T14:40:57.000 controller-0 dnsmasq-dhcp[101586]: info DHCPACK(enp0s8) 192.168.204.3 08:00:27:2a:b1:e7 controller-1 From: Rai, Ankush Sent: Tuesday, June 15, 2021 2:14 PM To: starlingx-discuss at lists.starlingx.io Cc: Sambandan, Devaraj ; Dharwadkar, Sriram Subject: [Starlingx-discuss] PXE-boot error with 5.0 build [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, We are trying to bring-up Starlingx in ALL-IN-ONE DUPLEX 
mode and using the 5.0 release-build. The first node is coming up properly but the second node is failed in PXE boot. Note: We have created OAM and MGMT VLAN over same interface and enable the pxeboot on the interface. This networking was working fine with starlingx:4.0 and we are seeing the issue with 5.0 only The screen-shot is attached for your reference. Thanks, Ankush -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Wed Jun 16 14:58:19 2021 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 16 Jun 2021 14:58:19 +0000 Subject: [Starlingx-discuss] Openstack Application Upgrade In-Reply-To: References: Message-ID: Hi Anirudh, We upgraded OpenStack version to ussuri in R4 and have no upgrade plan so far. However, you can upgrade it to the victoria or another version release after ussuri by yourself or with the help from our community. Thanks! Zhipeng From: Anirudh Gupta Sent: 2021年6月10日 19:09 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Openstack Application Upgrade Hi Team, We had deployed StarlingX 4.0 in Bare metal Standard with Controller Storage Mode. We have also installed Openstack application from StarlingX cengn mirror stx-openstack-1.0-49-centos-stable-versioned. On the top of Openstack, few workloads are also running. As per the link below, https://docs.starlingx.io/updates/kubernetes/upgrading-all-in-one-duplex-or-standard.html#upgrading-all-in-one-duplex-or-standard There is an option to upgrade starlingx software release from 4.0 to 5.0 without hampering the existing configuration and in a seperate link, openstack patch upgrade is also possible https://docs.starlingx.io/updates/openstack/apply-update-to-the-stx-openstack-application.html Is there any option to upgrade openstack release to the victoria or another version release after ussuri, once we do upgrade from 4.0 to 5.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Wed Jun 16 15:26:49 2021 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Wed, 16 Jun 2021 15:26:49 +0000 Subject: [Starlingx-discuss] Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) In-Reply-To: References: Message-ID: This appears to be the puppet class responsible for that flag file that needs to be created https://github.com/starlingx/stx-puppet/blob/r/stx.4.0/puppet-manifests/src/modules/openstack/manifests/keystone.pp#L388 Check for Error or Warning entries under the various logs under the /var/log/puppet folder Al From: Williams, Alexander Sent: Tuesday, June 15, 2021 9:24 PM To: Bailey, Henry Albert (Al) ; starlingx-discuss at lists.starlingx.io Subject: RE: Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi Bailey, Thanks for the help - I've been able to increase the timeout time, but after setting it to timeout after ~5 hours it has not made any progress. Is there anything in particular in the puppet logs that I should be looking for? Thanks again! Best, Alex From: Bailey, Henry Albert (Al) > Sent: Tuesday, June 15, 2021 1:33 PM To: Williams, Alexander >; starlingx-discuss at lists.starlingx.io Subject: RE: Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) I don't know if it helps, but that timeout had to be increased to 720 because it took a long time on certain hardware. 
See: https://github.com/starlingx/ansible-playbooks/commit/0a1c06a66bc286b306bfdf4ada7cf823787b7a94 You may be able to increase the value even more. The logs under /var/log/puppet should indicate if there was a failure, and if not, may indicate how long it took. Al From: Williams, Alexander > Sent: Tuesday, June 15, 2021 12:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Installation Problem: Configuration failed (Timeout at service endpoints reconfiguration) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, I'm in the process of installing StarlingX 4.0.1 as a Bare Metal AIO Duplex for a Central Cloud, and so far have been unable to progress past the ansible bootstrap step, where it fails at the step "Wait for service endpoints reconfiguration to complete". I've attached the ansible.log from my most recent attempt to help debugging. If it would help, I can also send the localhost.yml file I've been using and the sysinv.log. Thank you in advance for any help you can offer. Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From maryx.camp at intel.com Wed Jun 16 20:49:29 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 16 Jun 2021 20:49:29 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 16-Jun-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 16-Jun-21 All -- reviews merged since last meeting: 18 Status/questions/opens R5 retrospective input. Mary will send email to the discuss list about gathering docs-specific retrospective feedback (what we did, how to improve, etc) to be discussed next week's meeting. Jira & LP gap discussion from last week - Greg replied in the discussion list: http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011579.html No easy solution, still TBD. We will work on this with the other teams. Ildiko's proposal to stop recording these meetings - http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011583.html We have no issues with this. Mary checked one of the old recordings out of curiosity, it captures voice and screen sharing. StarlingX Release 6.0 planning New feature may be coming sooner in master branch. Juanita will look into it and coordinate with Ghada. Cherry pick meeting - we need to figure this out as soon as possible. The WR downstream is built on master/latest instead of the R5 branch. There will be homework before the meeting. Ron will do the info gathering - compare the git commits (in repo and logs) against merged reviews. Before the meeting, we all need to evaluate our own reviews to see if they should be cherry picked or just redone in the r5 branch. Ron will pick a meeting day by the end of the week. Shooting for early next week. Juanita will set up Zoom when we know the best day. Mary will send Juanita her availability. From build.starlingx at gmail.com Thu Jun 17 04:41:02 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 17 Jun 2021 00:41:02 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1277 - Still Failing! 
In-Reply-To: <1688900229.134.1623818410735.JavaMail.javamailuser@localhost> References: <1688900229.134.1623818410735.JavaMail.javamailuser@localhost> Message-ID: <1299287810.142.1623904864722.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1277 Status: Still Failing Timestamp: 20210617T043315Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210617T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210617T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210617T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210617T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Thu Jun 17 04:41:06 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 17 Jun 2021 00:41:06 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 953 - Still Failing! In-Reply-To: <1918671137.137.1623818415405.JavaMail.javamailuser@localhost> References: <1918671137.137.1623818415405.JavaMail.javamailuser@localhost> Message-ID: <1423547860.145.1623904867018.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 953 Status: Still Failing Timestamp: 20210617T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210617T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From nicolae.jascanu at intel.com Thu Jun 17 17:13:05 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Thu, 17 Jun 2021 17:13:05 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210617T013403Z Message-ID: Sanity Test from 2021-June-17 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210617T013403Z/outputs/iso/ ) Status: GREEN Executed on BARE METAL DUPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210617T013403Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Jun 17 18:19:34 2021 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 17 Jun 2021 18:19:34 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - June 16/2021 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release Team Meeting - Jun 16 2021 stx.5.0 - stx.5.0 Docs - stx.5.0 doc cherrypicks have not all been done yet. 
Forecast: end of June - Will discuss re-tagging at that time stx.6.0 - Release Planning Spreadsheet: https://docs.google.com/spreadsheets/d/13p0BMlBgJXUVForOFsblAJq9jA1-FMBlmhV5TIc70IE/edit#gid=1107209846 - PLs took the action to fill in the feature dates as much as possible for the next meeting in two weeks From build.starlingx at gmail.com Fri Jun 18 04:40:49 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 18 Jun 2021 00:40:49 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1278 - Still Failing! In-Reply-To: <1491717042.140.1623904860416.JavaMail.javamailuser@localhost> References: <1491717042.140.1623904860416.JavaMail.javamailuser@localhost> Message-ID: <775605512.148.1623991249774.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1278 Status: Still Failing Timestamp: 20210618T043331Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210618T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210618T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210618T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210618T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Fri Jun 18 04:40:51 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 18 Jun 2021 00:40:51 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 954 - Still Failing! In-Reply-To: <1153289077.143.1623904865257.JavaMail.javamailuser@localhost> References: <1153289077.143.1623904865257.JavaMail.javamailuser@localhost> Message-ID: <2134540643.151.1623991251747.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 954 Status: Still Failing Timestamp: 20210618T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210618T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From openinfradn at gmail.com Fri Jun 18 05:14:35 2021 From: openinfradn at gmail.com (open infra) Date: Fri, 18 Jun 2021 10:44:35 +0530 Subject: [Starlingx-discuss] Error creating VMs In-Reply-To: References: Message-ID: Thanks Austin. Seems the issue is connectivity to rabbitmq though nothing altered manually. 2021-06-18T04:37:58.675414124Z stdout F 2021-06-18 04:37:58.672 1 INFO nova.scheduler.manager [req-25ec038c-661d-41fa-ac07-57a2eeb2fd6c d9f7048c1cd947cfa8ecef128a6cee89 e8813293073545f99658adbec2f80c1d - default default] Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up. 2021-06-18T04:37:58.67589708Z stdout F 2021-06-18 04:37:58.672 1 INFO nova.scheduler.manager [req-25ec038c-661d-41fa-ac07-57a2eeb2fd6c d9f7048c1cd947cfa8ecef128a6cee89 e8813293073545f99658adbec2f80c1d - default default] Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up. 
2021-06-18T04:37:59.291144415Z stdout F 2021-06-18 04:37:59.289 1 WARNING nova.scheduler.utils [req-25ec038c-661d-41fa-ac07-57a2eeb2fd6c d9f7048c1cd947cfa8ecef128a6cee89 e8813293073545f99658adbec2f80c1d - default default] [instance: 3dbd2298-acf5-4123-a6e5-608113b0dbb7] Setting instance to ERROR state.: nova.exception_Remote.NoValidHost_Remote: No valid host was found. 2021-06-18T04:37:59.533353963Z stdout F 2021-06-18 04:37:59.532 1 ERROR oslo.messaging._drivers.impl_rabbit [req-25ec038c-661d-41fa-ac07-57a2eeb2fd6c d9f7048c1cd947cfa8ecef128a6cee89 e8813293073545f99658adbec2f80c1d - default default] Connection failed: failed to resolve broker hostname (retrying in 0 seconds): OSError: failed to resolve broker hostname controller-0:/var/log/pods$ sudo rabbitmqctl cluster_status Password: Cluster status of node rabbit at localhost ... [{nodes,[{disc,[rabbit at localhost]}]}, {running_nodes,[rabbit at localhost]}, {cluster_name,<<"rabbit at controller-0">>}, {partitions,[]}, {alarms,[{rabbit at localhost,[]}]}] On Wed, Jun 16, 2021 at 8:03 PM Sun, Austin wrote: > Hi Danishka: > > Please check openstack logs which are under /var/log/pods. You might > check worker nodes logs. > > > > Thanks. > > BR > Austin Sun. > > > > *From:* open infra > *Sent:* Tuesday, June 15, 2021 8:59 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Error creating VMs > > > > Hi, > > > > I have deployed StarlingX R 5, with Standard Dedicated Storage. Noticed > that VM creation fail without much information. But I managed to created > VMs in STX R5 Simplex AIO. > > > > I am not sure if this is due to misconfiguration of networks or worker-0 > (currently only one worker node is available in standard deployment). STX > Alam " underlying-resource-unavailable" > > > > http://paste.openstack.org/show/806626/ > > > > I highly appreciate if someone can guide to dig further (what logs to > check ) or to fix this issue. > > > > Regards, > > Danishka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Fri Jun 18 14:21:16 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Fri, 18 Jun 2021 14:21:16 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210618T013341Z Message-ID: Sanity Test from 2021-June-18 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210618T013341Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL STANDARD Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210618T013341Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sat Jun 19 04:52:21 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 19 Jun 2021 00:52:21 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1279 - Still Failing! 
In-Reply-To: <1700893797.146.1623991247403.JavaMail.javamailuser@localhost> References: <1700893797.146.1623991247403.JavaMail.javamailuser@localhost> Message-ID: <584146155.154.1624078343577.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1279 Status: Still Failing Timestamp: 20210619T044357Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210619T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210619T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210619T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210619T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Sat Jun 19 04:52:25 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 19 Jun 2021 00:52:25 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 955 - Still Failing! In-Reply-To: <767608690.149.1623991250208.JavaMail.javamailuser@localhost> References: <767608690.149.1623991250208.JavaMail.javamailuser@localhost> Message-ID: <312597469.157.1624078346161.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 955 Status: Still Failing Timestamp: 20210619T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210619T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From nicolae.jascanu at intel.com Sat Jun 19 13:46:27 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Sat, 19 Jun 2021 13:46:27 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210619T020425Z Message-ID: Sanity Test from 2021-June-19 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210619T020425Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL SIMPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210619T020425Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sun Jun 20 04:41:21 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 20 Jun 2021 00:41:21 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1280 - Still Failing! 
In-Reply-To: <485389594.152.1624078338800.JavaMail.javamailuser@localhost> References: <485389594.152.1624078338800.JavaMail.javamailuser@localhost> Message-ID: <676549271.160.1624164082185.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1280 Status: Still Failing Timestamp: 20210620T043403Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210620T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210620T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210620T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210620T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Sun Jun 20 04:41:23 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 20 Jun 2021 00:41:23 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 956 - Still Failing! In-Reply-To: <1748752791.155.1624078344156.JavaMail.javamailuser@localhost> References: <1748752791.155.1624078344156.JavaMail.javamailuser@localhost> Message-ID: <635738459.163.1624164084090.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 956 Status: Still Failing Timestamp: 20210620T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210620T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From build.starlingx at gmail.com Mon Jun 21 04:41:52 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 21 Jun 2021 00:41:52 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1281 - Still Failing! In-Reply-To: <2059562518.158.1624164079745.JavaMail.javamailuser@localhost> References: <2059562518.158.1624164079745.JavaMail.javamailuser@localhost> Message-ID: <1638000897.166.1624250512953.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1281 Status: Still Failing Timestamp: 20210621T043423Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210621T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210621T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210621T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210621T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Mon Jun 21 04:41:54 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 21 Jun 2021 00:41:54 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 957 - Still Failing! 
In-Reply-To: <1304274636.161.1624164082616.JavaMail.javamailuser@localhost> References: <1304274636.161.1624164082616.JavaMail.javamailuser@localhost> Message-ID: <276025355.169.1624250515053.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 957 Status: Still Failing Timestamp: 20210621T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210621T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From openinfradn at gmail.com Mon Jun 21 11:10:42 2021 From: openinfradn at gmail.com (open infra) Date: Mon, 21 Jun 2021 16:40:42 +0530 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: thank you Bill and Thiago. Now I have switched to Release 5. Don't we need to set following labels for release 5 deployment if we supposed to deploy stx-openstack? for controllers: system host-label-assign $NODE openstack-control-plane=enabled For worker nodes: system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled Because these labels are not visible in the release 5 installation guide. On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill wrote: > Hi again Danishka, we discussed this too. > > > > It was suggested that you check /var/logs/armada to see if there are any > Armada startup logs that’d help understand what’s going on. > > > > Thanks, Bill... > > > > *From:* open infra > *Sent:* Saturday, May 22, 2021 2:28 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] stx-openstack application applying failed > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi, > > > > I have deployed StarlingX R4 (bare metal dedicated storage installation). > > stx-openstack application applying was failed. > > > > When I retrieve openstack pods, I can see the status *osh-openstack-garbd-garbd-7d4957d9f4-kz95v > *is pending. > > I have re-uploaded stx-openstack but the same results. > > > > I highly appreciate it if someone can help to fix resolve this matter as > we have a demo next week. > > > > More details available here. > > describe pod osh-openstack-garbd-garbd > > http://paste.openstack.org/show/805587/ > > > describe nodes http://paste.openstack.org/show/805589/ > > > > > Regards, > > Danishka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Jun 21 20:10:27 2021 From: scott.little at windriver.com (Scott Little) Date: Mon, 21 Jun 2021 16:10:27 -0400 Subject: [Starlingx-discuss] Slow repo sync times and the yocto kernel Message-ID: <1d1828bf-2aa4-f100-42ee-3d11d14f5af8@windriver.com> Hi all The yocto kernel git was added to the StarlingX manifests late last week.  Since then I've heard a lot of grumbling about slow 'repo sync' times.  It affects folks setting up a new distro or monolithic workspace for the first time.  The repo-sync time can exceed an hour as the entire history of the linux kernel is downloaded.  You will also notice an additional 5.5 GB of storage consumed to hold all this history.  Subsequent repo sync's should be fast. So the question is... what if anything do we do about it? Our options... 1) Leave it as is. Hope that folks are mostly working in the 'flock' or 'container' layers, and NOT using monolithic builds, and so the number of folk impacted is low.   
Folk working at the distro layer or using monolithic can work on something else, or go to lunch, while they wait for the initial repo sync to complete. 2) Try to minimize the amount of kernel history we download through a manifest change.  Limiting the git history depth does the trick ...        The good ...  - repo sync time drops from ~1 hr to ~5 min  - storage drops from ~5.5 GB to ~ 5GB The bad ... - This is fragile.  It assumes that the desired rt sha can be reached from 100 commits from head of branch.  However, the connection to the upstream git server drops if we ask for much more than that. e.g. depth=500 is a guaranteed fail.  So upstream adds a few patches and we might start failing our repo-sync. - The history is incomplete, this may hinder kernel developers. A 'git fetch linux-yocto' should pull in the rest of the history, so probably not a blocker. 3) We could double the number of manifests at each layer. One would only pull in the minimal kernel history, and the other the full history. 4) Create a mirror on a larger git server, like github, and hope that significantly improves the download speed. 5) Download a tarball of the yocto kernel, rather than pulling in its git tree.  Yocto's git server doesn't seem to be set up to serve custom tarballs based on a requested sha.  We would have to set it all up manually, and it's not remotely convenient to kernel developers. Of these options, I'm leaning toward option 2, but look forward to hearing from the community. Scott From build.starlingx at gmail.com Tue Jun 22 04:41:42 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jun 2021 00:41:42 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1282 - Still Failing! In-Reply-To: <1357017803.164.1624250510415.JavaMail.javamailuser@localhost> References: <1357017803.164.1624250510415.JavaMail.javamailuser@localhost> Message-ID: <1649071989.172.1624336903031.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1282 Status: Still Failing Timestamp: 20210622T043418Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210622T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210622T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210622T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210622T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Tue Jun 22 04:41:44 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jun 2021 00:41:44 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 958 - Still Failing!
In-Reply-To: <1615804701.167.1624250513382.JavaMail.javamailuser@localhost> References: <1615804701.167.1624250513382.JavaMail.javamailuser@localhost> Message-ID: <1753033869.175.1624336905373.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 958 Status: Still Failing Timestamp: 20210622T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210622T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From austin.sun at intel.com Tue Jun 22 06:36:56 2021 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 22 Jun 2021 06:36:56 +0000 Subject: [Starlingx-discuss] Cancel StarlingX Distro-OpenStack: Bi-weekly Project Meeting -- 06/22 Message-ID: Hi All: Cancel today openstack distro meeting due to conflict. Thanks. BR Austin Sun -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Tue Jun 22 06:36:22 2021 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 22 Jun 2021 06:36:22 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Distro-OpenStack: Bi-weekly Project Meeting(Summer Time) Message-ID: Hi folks, This is a new series of bi-weekly project meeting on StarlingX Distro-OpenStack. Your participation to this meeting and/or other offline contribution by all means are highly appreciated! Project Team Etherpad: https://etherpad.openstack.org/p/stx-distro-openstack-meetings The Summer Time Slot for this meeting : CST: 9:00 PM (China, Shanghai ) PST: 7:00 AM (US West , US, Oregon) EST: 9:00 AM (East Canada , Canada Ottawa) Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3559 bytes Desc: not available URL: From build.starlingx at gmail.com Tue Jun 22 14:40:56 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jun 2021 10:40:56 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1283 - Still Failing! In-Reply-To: <133691417.170.1624336899275.JavaMail.javamailuser@localhost> References: <133691417.170.1624336899275.JavaMail.javamailuser@localhost> Message-ID: <918253861.178.1624372857066.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1283 Status: Still Failing Timestamp: 20210622T144050Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210622T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210622T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210622T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210622T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Tue Jun 22 14:53:35 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jun 2021 10:53:35 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1284 - Still Failing! 
In-Reply-To: <944743935.176.1624372852332.JavaMail.javamailuser@localhost> References: <944743935.176.1624372852332.JavaMail.javamailuser@localhost> Message-ID: <1516748144.181.1624373616007.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1284 Status: Still Failing Timestamp: 20210622T144641Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210622T144247Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210622T144247Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210622T144247Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210622T144247Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Tue Jun 22 14:53:37 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jun 2021 10:53:37 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 959 - Still Failing! In-Reply-To: <1735004109.173.1624336903563.JavaMail.javamailuser@localhost> References: <1735004109.173.1624336903563.JavaMail.javamailuser@localhost> Message-ID: <1448281952.184.1624373618339.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 959 Status: Still Failing Timestamp: 20210622T144247Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210622T144247Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From nicolae.jascanu at intel.com Tue Jun 22 17:11:18 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Tue, 22 Jun 2021 17:11:18 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210619T020425Z Message-ID: Sanity Test from 2021-June-19 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210619T020425Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL DUPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210619T020425Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexandru.dimofte at intel.com Tue Jun 22 17:11:33 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 22 Jun 2021 17:11:33 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210621T230343Z Message-ID: Sanity Test from 2021-June-21 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210621T230343Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210621T230343Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 89 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 5155 bytes Desc: image003.png URL: From build.starlingx at gmail.com Wed Jun 23 04:40:44 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 23 Jun 2021 00:40:44 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1285 - Still Failing! In-Reply-To: <433198850.179.1624373612959.JavaMail.javamailuser@localhost> References: <433198850.179.1624373612959.JavaMail.javamailuser@localhost> Message-ID: <281076814.187.1624423245717.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1285 Status: Still Failing Timestamp: 20210623T043402Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210623T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210623T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210623T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210623T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Wed Jun 23 04:40:47 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 23 Jun 2021 00:40:47 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 960 - Still Failing! 
In-Reply-To: <1892301150.182.1624373616584.JavaMail.javamailuser@localhost> References: <1892301150.182.1624373616584.JavaMail.javamailuser@localhost> Message-ID: <593996769.190.1624423249490.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 960 Status: Still Failing Timestamp: 20210623T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210623T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From alexandru.dimofte at intel.com Wed Jun 23 07:45:10 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 23 Jun 2021 07:45:10 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210623T013446Z Message-ID: Sanity Test from 2021-June-23 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210623T013446Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210623T013446Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 71 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 83 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Bill.Zvonar at windriver.com Wed Jun 23 13:42:41 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 23 Jun 2021 13:42:41 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 23, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community coming up in a few minutes. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210623T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Bill.Zvonar at windriver.com Wed Jun 23 15:06:53 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 23 Jun 2021 15:06:53 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 23, 2021) In-Reply-To: References: Message-ID: From today's call... * Standing Topics * Build/Sanity * sanity all green since last week * several build issues - all intermittent? 
* layered builds have been fine, the issues have been with the monolithic builds – planning to keep the monolithic builds for the time being * Gerrit Reviews in Need of Attention * nothing this week * Topics for this Week * Slow repo sync times and the yocto kernel * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011630.html * Scott & Mark agreed that option 2 is the way to go, at least to begin with * ARs from Previous Meetings * nothing this week * Open Requests for Help * stx-openstack application applying failed * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011629.html * Greg will respond * Build Matters (if required) * nothing this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, June 23, 2021 9:43 AM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (June 23, 2021) Hi all, reminder of the weekly TSC/Community coming up in a few minutes. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210623T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From openinfradn at gmail.com Wed Jun 23 15:22:42 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 23 Jun 2021 20:52:42 +0530 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: Here is more information about the issue. http://paste.openstack.org/show/806872/ Then I set the openstack-compute-node label to the controller-0 and re-apply stx-openstack (just to test). Then stx-openstack applying progress continued up to 55%. I can lock/unlcok the worker-0 via controller nodes. So, it should not be a problem with management network. On Mon, Jun 21, 2021 at 4:40 PM open infra wrote: > thank you Bill and Thiago. > Now I have switched to Release 5. > Don't we need to set following labels for release 5 deployment if we > supposed to deploy stx-openstack? > > for controllers: > > system host-label-assign $NODE openstack-control-plane=enabled > > For worker nodes: > > system host-label-assign $NODE openstack-compute-node=enabled > system host-label-assign $NODE openvswitch=enabled > > Because these labels are not visible in the release 5 installation guide. > > On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill > wrote: > >> Hi again Danishka, we discussed this too. >> >> >> >> It was suggested that you check /var/logs/armada to see if there are any >> Armada startup logs that’d help understand what’s going on. >> >> >> >> Thanks, Bill... >> >> >> >> *From:* open infra >> *Sent:* Saturday, May 22, 2021 2:28 AM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] stx-openstack application applying failed >> >> >> >> [Please note: This e-mail is from an EXTERNAL e-mail address] >> >> Hi, >> >> >> >> I have deployed StarlingX R4 (bare metal dedicated storage installation). >> >> stx-openstack application applying was failed. >> >> >> >> When I retrieve openstack pods, I can see the status *osh-openstack-garbd-garbd-7d4957d9f4-kz95v >> *is pending. >> >> I have re-uploaded stx-openstack but the same results. >> >> >> >> I highly appreciate it if someone can help to fix resolve this matter as >> we have a demo next week. 
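A short aside for readers following this thread: a minimal way to see where a stuck stx-openstack apply is sitting, sketched here with a placeholder pod name (nothing below is taken from the paste links in this thread), is:

    source /etc/platform/openrc
    system application-list                        # overall apply state and progress
    system application-show stx-openstack          # details, including the last progress message
    kubectl -n armada get pods                     # find the running armada pod name
    kubectl -n armada logs <armada-pod> --all-containers | tail -100
    kubectl -n openstack get pods | grep -v -e Running -e Completed   # pods still Pending or failed

The armada and openstack namespaces are the platform defaults; <armada-pod> has to be substituted from the 'get pods' output.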
>> >> >> >> More details available here. >> >> describe pod osh-openstack-garbd-garbd >> >> http://paste.openstack.org/show/805587/ >> >> >> describe nodes http://paste.openstack.org/show/805589/ >> >> >> >> >> Regards, >> >> Danishka >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: unnamed.png Type: image/png Size: 34246 bytes Desc: not available URL: From maryx.camp at intel.com Thu Jun 24 01:32:24 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 24 Jun 2021 01:32:24 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 23-Jun-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 23-Jun-21 All -- reviews merged since last meeting: 11 (mostly cherry picks) Status/questions/opens Cherry pick meetings - we have had 3 sessions this week and are making good progress on the list. We're motivated to continue and knock it out as soon as we can. Mary's AR from last week asking for retrospective input - not done yet. Will wait till after cherry pick process is over. The rest of the meeting time was used for cherry pick discussion. From build.starlingx at gmail.com Thu Jun 24 04:41:29 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 24 Jun 2021 00:41:29 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1286 - Still Failing! In-Reply-To: <959999606.185.1624423242186.JavaMail.javamailuser@localhost> References: <959999606.185.1624423242186.JavaMail.javamailuser@localhost> Message-ID: <1925312098.193.1624509689805.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1286 Status: Still Failing Timestamp: 20210624T043415Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210624T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210624T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210624T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210624T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Thu Jun 24 04:41:31 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 24 Jun 2021 00:41:31 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 961 - Still Failing! 
In-Reply-To: <678582114.188.1624423246221.JavaMail.javamailuser@localhost> References: <678582114.188.1624423246221.JavaMail.javamailuser@localhost> Message-ID: <1230548245.196.1624509691984.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 961 Status: Still Failing Timestamp: 20210624T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210624T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From openinfradn at gmail.com Thu Jun 24 08:29:04 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 24 Jun 2021 13:59:04 +0530 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: I managed to deploy stx-monitoring that require labelling only in controller nodes. Definitely something wrong with worker-0 labelling On Wed, Jun 23, 2021 at 8:52 PM open infra wrote: > Here is more information about the issue. > http://paste.openstack.org/show/806872/ > > Then I set the openstack-compute-node label to the controller-0 and > re-apply stx-openstack (just to test). > Then stx-openstack applying progress continued up to 55%. > > I can lock/unlcok the worker-0 via controller nodes. So, it should not be > a problem with management network. > > > On Mon, Jun 21, 2021 at 4:40 PM open infra wrote: > >> thank you Bill and Thiago. >> Now I have switched to Release 5. >> Don't we need to set following labels for release 5 deployment if we >> supposed to deploy stx-openstack? >> >> for controllers: >> >> system host-label-assign $NODE openstack-control-plane=enabled >> >> For worker nodes: >> >> system host-label-assign $NODE openstack-compute-node=enabled >> system host-label-assign $NODE openvswitch=enabled >> >> Because these labels are not visible in the release 5 installation guide. >> >> On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill >> wrote: >> >>> Hi again Danishka, we discussed this too. >>> >>> >>> >>> It was suggested that you check /var/logs/armada to see if there are >>> any Armada startup logs that’d help understand what’s going on. >>> >>> >>> >>> Thanks, Bill... >>> >>> >>> >>> *From:* open infra >>> *Sent:* Saturday, May 22, 2021 2:28 AM >>> *To:* starlingx-discuss at lists.starlingx.io >>> *Subject:* [Starlingx-discuss] stx-openstack application applying failed >>> >>> >>> >>> [Please note: This e-mail is from an EXTERNAL e-mail address] >>> >>> Hi, >>> >>> >>> >>> I have deployed StarlingX R4 (bare metal dedicated storage installation). >>> >>> stx-openstack application applying was failed. >>> >>> >>> >>> When I retrieve openstack pods, I can see the status *osh-openstack-garbd-garbd-7d4957d9f4-kz95v >>> *is pending. >>> >>> I have re-uploaded stx-openstack but the same results. >>> >>> >>> >>> I highly appreciate it if someone can help to fix resolve this matter as >>> we have a demo next week. >>> >>> >>> >>> More details available here. >>> >>> describe pod osh-openstack-garbd-garbd >>> >>> http://paste.openstack.org/show/805587/ >>> >>> >>> describe nodes http://paste.openstack.org/show/805589/ >>> >>> >>> >>> >>> Regards, >>> >>> Danishka >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Alexander.Williams at commscope.com Thu Jun 24 19:15:49 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Thu, 24 Jun 2021 19:15:49 +0000 Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) In-Reply-To: References: Message-ID: Hi Bart, Would you happen to know in which files the UUID is generated, and where it gets shared to the other hosts? Is the UUID the only thing connecting the hosts, or is there other information along with it? Thanks, Alex From: Wensley, Barton Sent: Thursday, June 10, 2021 3:25 PM To: Williams, Alexander ; Waines, Greg ; starlingx-discuss at lists.starlingx.io Subject: RE: Adding hosts (Bare Metal AIO Duplex) Alex, I don't think you will be able to pre-install controller-1 and then add it to the system. When the first controller is installed, a unique UUID is generated. That UUID is then copied on to each host in the system as it is installed. I'm pretty sure that if you were to pre-install a host (e.g. from an ISO), the UUID will not match and when it boots it will fail to initialize (there will be a configuration failure and the services won't come up). Bart From: Williams, Alexander > Sent: Thursday, June 10, 2021 3:07 PM To: Waines, Greg >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi Greg, Thanks for your response! I'll be giving the host-add a shot. The reason I asked is because pre-installing the image potentially provides a speedup to deployment times. Installing the images beforehand would cut down on the downtime waiting for the second controller to be provisioned without personality and on total time assuming that the images for both controllers are installed simultaneously. Best, Alex From: Waines, Greg > Sent: Thursday, June 10, 2021 1:31 PM To: Williams, Alexander >; starlingx-discuss at lists.starlingx.io Subject: RE: Adding hosts (Bare Metal AIO Duplex) Hey Alex, * For question 1 * your understanding of the "host-update" use case is correct i.e. * power on host which dhcp's on mgmt. network * host gets auto discovered by controller-0 and auto-provisioned without personality * user uses host-update to set personality * controller-0 installs software for that personality * "host-add" use case is sort of the opposite ... configure host first, then power it on i.e. * user uses "host-add" command and configures host in system's inventory with identifying information such as BMC IP Address, mgmt. network MAC, etc., and the host's personality * user uses "host-power-on" command to power on the host via the BMC * host powers on, dhcp's on mgmt. network * gets recognized by controller-0 from previously configured host info (e.g. mgmt. MAC, ...) * controller-0 installs software for the previously configured personality of this host. * For question 2 * pretty sure answer is no * I believe starlingx sysinv/mtce/swmgmt software will always want to install software on a new host * * ... although, thinking of question 1, you could try doing a host-add with the identifying information of controller-1, and power on controller-1 and see if controller-0 will try to re-install or not * no matter what there will be software versioning checks that happen at boot time to ensure controller-1 is running the same software as controller-0 and is patch current based on controller-0's applied patches * * ... however, why do you want to do this ? * like is this a real use case ? Greg. From: Williams, Alexander > Sent: Monday, June 7, 2021 12:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, My current understanding is that whenever adding a host (esp. controller-1) using the system host-update command after a PXE boot, StarlingX will install the base image and then perform the configuration steps to make it a controller, worker, etc., overwriting anything that was previously installed on the machine. 1. Is my understanding of host-update correct, or am I missing something important here?
although, thinking of question 1, you could try doing a host-add with the identifying information of controller-1, and power on controller-1 and see if controller-0 will try to re-install or not * no matter what there will be software versioning checks that happen at boot time to ensure controller-1 is running the same software as controller-0 and is patch current based on controller-0's applied patches * * ... however, why do you want to do this ? * like is this a real use case ? Greg. From: Williams, Alexander > Sent: Monday, June 7, 2021 12:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, My current understanding is that whenever adding a host (esp. controller-1) using the system host-update command after a PXE boot, StarlingX will install the base image and then perform the configuration steps to make it a controller, worker, etc., overwriting anything that was previously installed on the machine. 1. Is my understanding of host-update correct, or am I missing something important here? Does host-add do the same thing, but gets run on controller-0 before booting and not after? 2. If I install the StarlingX image on a server that will become controller-1, is there any way to add it to the host list of controller-0 and configure its personality without the server reinstalling StarlingX? Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Fri Jun 25 01:32:36 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 24 Jun 2021 21:32:36 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_compiler_master_master - Build # 601 - Failure! Message-ID: <1941659302.203.1624584757008.JavaMail.javamailuser@localhost> Project: STX_build_layer_compiler_master_master Build #: 601 Status: Failure Timestamp: 20210625T013006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20210625T013006Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Fri Jun 25 04:31:24 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 25 Jun 2021 00:31:24 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 962 - Still Failing! 
In-Reply-To: <1735676456.194.1624509690329.JavaMail.javamailuser@localhost> References: <1735676456.194.1624509690329.JavaMail.javamailuser@localhost> Message-ID: <1594189049.210.1624595485057.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 962 Status: Still Failing Timestamp: 20210625T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210625T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From nicolae.jascanu at intel.com Fri Jun 25 12:03:14 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Fri, 25 Jun 2021 12:03:14 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210624T013431Z Message-ID: Sanity Test from 2021-June-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210624T013431Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL STANDARD Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210624T013431Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Fri Jun 25 14:29:45 2021 From: openinfradn at gmail.com (open infra) Date: Fri, 25 Jun 2021 19:59:45 +0530 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: Finally managed to deploy OpenStack but not sure what caused the issue. Increased disk capacity for docker in worker and reviewed network. On Thu, Jun 24, 2021 at 1:59 PM open infra wrote: > I managed to deploy stx-monitoring that require labelling only in > controller nodes. > Definitely something wrong with worker-0 labelling > > > On Wed, Jun 23, 2021 at 8:52 PM open infra wrote: > >> Here is more information about the issue. >> http://paste.openstack.org/show/806872/ >> >> Then I set the openstack-compute-node label to the controller-0 and >> re-apply stx-openstack (just to test). >> Then stx-openstack applying progress continued up to 55%. >> >> I can lock/unlcok the worker-0 via controller nodes. So, it should not be >> a problem with management network. >> >> >> On Mon, Jun 21, 2021 at 4:40 PM open infra wrote: >> >>> thank you Bill and Thiago. >>> Now I have switched to Release 5. >>> Don't we need to set following labels for release 5 deployment if we >>> supposed to deploy stx-openstack? >>> >>> for controllers: >>> >>> system host-label-assign $NODE openstack-control-plane=enabled >>> >>> For worker nodes: >>> >>> system host-label-assign $NODE openstack-compute-node=enabled >>> system host-label-assign $NODE openvswitch=enabled >>> >>> Because these labels are not visible in the release 5 installation guide. >>> >>> On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill >>> wrote: >>> >>>> Hi again Danishka, we discussed this too. >>>> >>>> >>>> >>>> It was suggested that you check /var/logs/armada to see if there are >>>> any Armada startup logs that’d help understand what’s going on. >>>> >>>> >>>> >>>> Thanks, Bill... 
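A note on the "increased disk capacity for docker" point above: the per-host filesystems can be inspected and resized through sysinv. A rough sketch, assuming worker-0 is the affected host and 60 GiB is an arbitrary example target size:

    source /etc/platform/openrc
    system host-fs-list worker-0       # current sizes of docker, kubelet, scratch, ...
    system host-lvg-list worker-0      # free space available in cgts-vg
    system host-fs-modify worker-0 docker=60

Depending on the release and on how much free space cgts-vg has, the host may need to be locked first, or the volume group may need to be extended with an additional partition before the resize is accepted.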
>>>> >>>> >>>> *From:* open infra >>>> *Sent:* Saturday, May 22, 2021 2:28 AM >>>> *To:* starlingx-discuss at lists.starlingx.io >>>> *Subject:* [Starlingx-discuss] stx-openstack application applying >>>> failed >>>> >>>> >>>> >>>> [Please note: This e-mail is from an EXTERNAL e-mail address] >>>> >>>> Hi, >>>> >>>> >>>> >>>> I have deployed StarlingX R4 (bare metal dedicated storage >>>> installation). >>>> >>>> stx-openstack application applying was failed. >>>> >>>> >>>> >>>> When I retrieve openstack pods, I can see the status *osh-openstack-garbd-garbd-7d4957d9f4-kz95v >>>> *is pending. >>>> >>>> I have re-uploaded stx-openstack but the same results. >>>> >>>> >>>> >>>> I highly appreciate it if someone can help to fix resolve this matter >>>> as we have a demo next week. >>>> >>>> >>>> >>>> More details available here. >>>> >>>> describe pod osh-openstack-garbd-garbd >>>> >>>> http://paste.openstack.org/show/805587/ >>>> >>>> >>>> describe nodes http://paste.openstack.org/show/805589/ >>>> >>>> >>>> >>>> >>>> Regards, >>>> >>>> Danishka >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Fri Jun 25 15:00:18 2021 From: openinfradn at gmail.com (open infra) Date: Fri, 25 Jun 2021 20:30:18 +0530 Subject: [Starlingx-discuss] Connectivity between worker and data network Message-ID: Hi, After creating the data network, the data network topology doesn't look like [1]. Data network is created as per the installation guide. [1] https://docs.starlingx.io/datanet/kubernetes/the-data-network-topology-view.html Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Fri Jun 25 17:43:09 2021 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Fri, 25 Jun 2021 17:43:09 +0000 Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) In-Reply-To: References: Message-ID: Alex, The UUID is stored in /www/pages/feed//install_uuid. It is generated by the anaconda kickstarts run by the installer (see https://opendev.org/starlingx/metal/src/branch/master/bsp-files/kickstarts). Each host retrieves the install_uuid from the active controller when the installer runs on that host (also done in the kickstarts). I don't know what else might go wrong if you don't install your additional hosts from the active controller. We don't support this or test it so you would be breaking new ground. Bart From: Williams, Alexander Sent: Thursday, June 24, 2021 3:16 PM To: Wensley, Barton ; starlingx-discuss at lists.starlingx.io Subject: RE: Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi Bart, Would you happen to know in which files the UUID is generated, and where it gets shared to the other hosts? Is the UUID the only thing connecting the hosts, or is there other information along with it? Thanks, Alex From: Wensley, Barton > Sent: Thursday, June 10, 2021 3:25 PM To: Williams, Alexander >; Waines, Greg >; starlingx-discuss at lists.starlingx.io Subject: RE: Adding hosts (Bare Metal AIO Duplex) Alex, I don't think you will be able to pre-install controller-1 and then add it to the system. When the first controller is installed, a unique UUID is generated.
That UUID is then copied on to each host in the system as it is installed. I'm pretty sure that if you were to pre-install a host (e.g. from an ISO), the UUID will not match and when it boots it will fail to initialize (there will be a configuration failure and the services won't come up). Bart From: Williams, Alexander > Sent: Thursday, June 10, 2021 3:07 PM To: Waines, Greg >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi Greg, Thanks for your response! I'll be giving the host-add a shot. The reason I asked is because pre-installing the image potentially provides a speedup to deployment times. Installing the images beforehand would cut down on the downtime waiting for the second controller to be provisioned without personality and on total time assuming that the images for both controllers are installed simultaneously. Best, Alex From: Waines, Greg > Sent: Thursday, June 10, 2021 1:31 PM To: Williams, Alexander >; starlingx-discuss at lists.starlingx.io Subject: RE: Adding hosts (Bare Metal AIO Duplex) Hey Alex, * For question 1 * your understanding of the "host-update" use case is correct i.e. * power on host which dhcp's on mgmt. network * host gets auto discovered by controller-0 and auto-provisioned without personality * user uses host-update to set personality * controller-0 installs software for that personality * "host-add" use case is sort of the opposite ... configure host first, then power it on i.e. * user uses "host-add" command and configures host in system's inventory with identifying information such as BMC IP Address, mgmt. network MAC, etc., and the host's personality * user uses "host-power-on" command to power on the host via the BMC * host powers on, dhcp's on mgmt. network * gets recognized by controller-0 from previously configured host info (e.g. mgmt. MAC, ...) * controller-0 installs software for the previously configured personality of this host. * For question 2 * pretty sure answer is no * I believe starlingx sysinv/mtce/swmgmt software will always want to install software on a new host * * ... although, thinking of question 1, you could try doing a host-add with the identifying information of controller-1, and power on controller-1 and see if controller-0 will try to re-install or not * no matter what there will be software versioning checks that happen at boot time to ensure controller-1 is running the same software as controller-0 and is patch current based on controller-0's applied patches * * ... however, why do you want to do this ? * like is this a real use case ? Greg. From: Williams, Alexander > Sent: Monday, June 7, 2021 12:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Adding hosts (Bare Metal AIO Duplex) [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, My current understanding is that whenever adding a host (esp. controller-1) using the system host-update command after a PXE boot, StarlingX will install the base image and then perform the configuration steps to make it a controller, worker, etc., overwriting anything that was previously installed on the machine. 1. Is my understanding of host-update correct, or am I missing something important here?
Does host-add do the same thing, but gets run on controller-0 before booting and not after? 2. If I install the StarlingX image on a server that will become controller-1, is there any way to add it to the host list of controller-0 and configure its personality without the server reinstalling StarlingX? Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Fri Jun 25 17:49:47 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 25 Jun 2021 13:49:47 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 963 - Still Failing! In-Reply-To: <1433394552.208.1624595483553.JavaMail.javamailuser@localhost> References: <1433394552.208.1624595483553.JavaMail.javamailuser@localhost> Message-ID: <1590192554.223.1624643387710.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 963 Status: Still Failing Timestamp: 20210625T174827Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210625T174827Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From scott.little at windriver.com Fri Jun 25 19:39:41 2021 From: scott.little at windriver.com (Scott Little) Date: Fri, 25 Jun 2021 15:39:41 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 963 - Still Failing! In-Reply-To: <1590192554.223.1624643387710.JavaMail.javamailuser@localhost> References: <1433394552.208.1624595483553.JavaMail.javamailuser@localhost> <1590192554.223.1624643387710.JavaMail.javamailuser@localhost> Message-ID: It seems repo sync died upon trying to download the original monolithic linux-yocto git, and left some corruption behind that blocked subsequent repo sync attempts. Deleting the corrupt repo/git metadata pertaining to linux-yocto, allowed the new single branch with minimal history download of linux-yocto to succeed. In case any others are stuck on this, the cleanup procedure is    rm -rf .repo/project-objects/linux-yocto.git* .repo/projects/cgcs-root/stx/git/linux-yocto-*    repo sync --force-sync -j20 Scott On 2021-06-25 1:49 p.m., build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_master_master > Build #: 963 > Status: Still Failing > Timestamp: 20210625T174827Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210625T174827Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolae.jascanu at intel.com Sat Jun 26 07:21:43 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Sat, 26 Jun 2021 07:21:43 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210624T013431Z Message-ID: Sanity Test from 2021-June-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210624T013431Z/outputs/iso/ ) Status: GREEN Executed on BARE METAL DUPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210624T013431Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Sat Jun 26 08:16:30 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Sat, 26 Jun 2021 08:16:30 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210626T021112Z Message-ID: Sanity Test from 2021-June-26 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210626T021112Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL SIMPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210626T021112Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Sat Jun 26 13:12:22 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Sat, 26 Jun 2021 13:12:22 +0000 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: … just following up on comment that these labels are not visible in the release 5 installation guide. I checked the worker node section of the following guides, and it appears these labels are described there: * https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/aio_duplex_extend.html#configure-worker-nodes * https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/controller_storage_install_kubernetes.html#configure-worker-nodes * https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage_install_kubernetes.html#configure-worker-nodes * https://docs.starlingx.io/deploy_install_guides/r5_release/virtual/controller_storage_install_kubernetes.html#configure-worker-nodes * https://docs.starlingx.io/deploy_install_guides/r5_release/virtual/dedicated_storage_install_kubernetes.html#configure-worker-nodes Can you let us know which page you thought the labels were missing from ? Greg. From: open infra Sent: Monday, June 21, 2021 7:11 AM To: Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] thank you Bill and Thiago. Now I have switched to Release 5. 
Don't we need to set following labels for release 5 deployment if we supposed to deploy stx-openstack? for controllers: system host-label-assign $NODE openstack-control-plane=enabled For worker nodes: system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled Because these labels are not visible in the release 5 installation guide. On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill > wrote: Hi again Danishka, we discussed this too. It was suggested that you check /var/logs/armada to see if there are any Armada startup logs that’d help understand what’s going on. Thanks, Bill... From: open infra > Sent: Saturday, May 22, 2021 2:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I have deployed StarlingX R4 (bare metal dedicated storage installation). stx-openstack application applying was failed. When I retrieve openstack pods, I can see the status osh-openstack-garbd-garbd-7d4957d9f4-kz95v is pending. I have re-uploaded stx-openstack but the same results. I highly appreciate it if someone can help to fix resolve this matter as we have a demo next week. More details available here. describe pod osh-openstack-garbd-garbd http://paste.openstack.org/show/805587/ describe nodes http://paste.openstack.org/show/805589/ Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Sat Jun 26 13:16:00 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Sat, 26 Jun 2021 13:16:00 +0000 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: Which deployment type are you configuring ? and which install guide are you following ? Also where did you find documentation on stx-monitoring ? I don’t believe this is being maintained any longer. Greg. From: open infra Sent: Thursday, June 24, 2021 4:29 AM To: Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] I managed to deploy stx-monitoring that require labelling only in controller nodes. Definitely something wrong with worker-0 labelling On Wed, Jun 23, 2021 at 8:52 PM open infra > wrote: Here is more information about the issue. http://paste.openstack.org/show/806872/ Then I set the openstack-compute-node label to the controller-0 and re-apply stx-openstack (just to test). Then stx-openstack applying progress continued up to 55%. I can lock/unlcok the worker-0 via controller nodes. So, it should not be a problem with management network. On Mon, Jun 21, 2021 at 4:40 PM open infra > wrote: thank you Bill and Thiago. Now I have switched to Release 5. Don't we need to set following labels for release 5 deployment if we supposed to deploy stx-openstack? for controllers: system host-label-assign $NODE openstack-control-plane=enabled For worker nodes: system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled Because these labels are not visible in the release 5 installation guide. On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill > wrote: Hi again Danishka, we discussed this too. It was suggested that you check /var/logs/armada to see if there are any Armada startup logs that’d help understand what’s going on. Thanks, Bill... 
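For what it's worth, a quick generic way to confirm that the labels referred to above actually landed on a node before re-applying (worker-0 is simply the hostname used in this thread) is:

    source /etc/platform/openrc
    system host-label-list worker-0
    kubectl get node worker-0 --show-labels
    system application-apply stx-openstack

If openstack-compute-node=enabled and openvswitch=enabled are missing from both outputs, the label assignment did not take effect and the apply can sit waiting on the compute pods.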
From: open infra > Sent: Saturday, May 22, 2021 2:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I have deployed StarlingX R4 (bare metal dedicated storage installation). stx-openstack application applying was failed. When I retrieve openstack pods, I can see the status osh-openstack-garbd-garbd-7d4957d9f4-kz95v is pending. I have re-uploaded stx-openstack but the same results. I highly appreciate it if someone can help to fix resolve this matter as we have a demo next week. More details available here. describe pod osh-openstack-garbd-garbd http://paste.openstack.org/show/805587/ describe nodes http://paste.openstack.org/show/805589/ Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Sat Jun 26 13:17:36 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Sat, 26 Jun 2021 13:17:36 +0000 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: Agreed that increased docker fs is not documented well … especially if you need to increase the cgts_vg logical volume group in order to increase the docker filesystem size. We have plans to fix this. Greg. From: open infra Sent: Friday, June 25, 2021 10:30 AM To: Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Finally managed to deploy OpenStack but not sure what caused the issue. Increased disk capacity for docker in worker and reviewed network. On Thu, Jun 24, 2021 at 1:59 PM open infra > wrote: I managed to deploy stx-monitoring that require labelling only in controller nodes. Definitely something wrong with worker-0 labelling On Wed, Jun 23, 2021 at 8:52 PM open infra > wrote: Here is more information about the issue. http://paste.openstack.org/show/806872/ Then I set the openstack-compute-node label to the controller-0 and re-apply stx-openstack (just to test). Then stx-openstack applying progress continued up to 55%. I can lock/unlcok the worker-0 via controller nodes. So, it should not be a problem with management network. On Mon, Jun 21, 2021 at 4:40 PM open infra > wrote: thank you Bill and Thiago. Now I have switched to Release 5. Don't we need to set following labels for release 5 deployment if we supposed to deploy stx-openstack? for controllers: system host-label-assign $NODE openstack-control-plane=enabled For worker nodes: system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled Because these labels are not visible in the release 5 installation guide. On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill > wrote: Hi again Danishka, we discussed this too. It was suggested that you check /var/logs/armada to see if there are any Armada startup logs that’d help understand what’s going on. Thanks, Bill... From: open infra > Sent: Saturday, May 22, 2021 2:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I have deployed StarlingX R4 (bare metal dedicated storage installation). stx-openstack application applying was failed. When I retrieve openstack pods, I can see the status osh-openstack-garbd-garbd-7d4957d9f4-kz95v is pending. 
I have re-uploaded stx-openstack but the same results. I highly appreciate it if someone can help to fix resolve this matter as we have a demo next week. More details available here. describe pod osh-openstack-garbd-garbd http://paste.openstack.org/show/805587/ describe nodes http://paste.openstack.org/show/805589/ Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Sat Jun 26 15:13:32 2021 From: openinfradn at gmail.com (open infra) Date: Sat, 26 Jun 2021 20:43:32 +0530 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: Hi Greg, Thanks for your reply. I was following Standard with Dedicated Storage and can't remember exact URL. I even did search for "system host-label-assign" but may I made a mistake while searching for labels, apologies for the inconvenience. I found stx-monitor when I was looking for performance and usage monitoring and ended up with StarlingX training slide set. Regards, Danishka On Sat, Jun 26, 2021 at 6:47 PM Waines, Greg wrote: > Agreed that increased docker fs is not documented well … especially if you > need to increase the cgts_vg logical volume group in order to increase the > docker filesystem size. We have plans to fix this. > > > > Greg. > > > > *From:* open infra > *Sent:* Friday, June 25, 2021 10:30 AM > *To:* Zvonar, Bill > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] stx-openstack application applying > failed > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Finally managed to deploy OpenStack but not sure what caused the issue. > > Increased disk capacity for docker in worker and reviewed network. > > > > On Thu, Jun 24, 2021 at 1:59 PM open infra wrote: > > I managed to deploy stx-monitoring that require labelling only in > controller nodes. > > Definitely something wrong with worker-0 labelling > > > > > > On Wed, Jun 23, 2021 at 8:52 PM open infra wrote: > > Here is more information about the issue. > > http://paste.openstack.org/show/806872/ > > > > > Then I set the openstack-compute-node label to the controller-0 and > re-apply stx-openstack (just to test). > > Then stx-openstack applying progress continued up to 55%. > > > > I can lock/unlcok the worker-0 via controller nodes. So, it should not be > a problem with management network. > > > > > > On Mon, Jun 21, 2021 at 4:40 PM open infra wrote: > > thank you Bill and Thiago. > > Now I have switched to Release 5. > > Don't we need to set following labels for release 5 deployment if we > supposed to deploy stx-openstack? > > > > for controllers: > > > > system host-label-assign $NODE openstack-control-plane=enabled > > > > For worker nodes: > > > > system host-label-assign $NODE openstack-compute-node=enabled > > system host-label-assign $NODE openvswitch=enabled > > > > Because these labels are not visible in the release 5 installation guide. > > > > On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill > wrote: > > Hi again Danishka, we discussed this too. > > > > It was suggested that you check /var/logs/armada to see if there are any > Armada startup logs that’d help understand what’s going on. > > > > Thanks, Bill... 
> > > > *From:* open infra > *Sent:* Saturday, May 22, 2021 2:28 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] stx-openstack application applying failed > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi, > > > > I have deployed StarlingX R4 (bare metal dedicated storage installation). > > stx-openstack application applying was failed. > > > > When I retrieve openstack pods, I can see the status *osh-openstack-garbd-garbd-7d4957d9f4-kz95v > *is pending. > > I have re-uploaded stx-openstack but the same results. > > > > I highly appreciate it if someone can help to fix resolve this matter as > we have a demo next week. > > > > More details available here. > > describe pod osh-openstack-garbd-garbd > > http://paste.openstack.org/show/805587/ > > > describe nodes http://paste.openstack.org/show/805589/ > > > > > Regards, > > Danishka > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at optimcloud.com Sat Jun 26 15:37:03 2021 From: lists at optimcloud.com (Embedded Devel) Date: Sat, 26 Jun 2021 15:37:03 +0000 Subject: [Starlingx-discuss] Download images and push FAIL Message-ID: <1624721547578.196637701.2558704178@optimcloud.com> stx 5.0 simplex on bare-metal seems some images "tiller" not found. TASK [common/push-docker-images : Download images and push to local registry] ********************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 1, "stderr": "Traceback (most recent call last):\n File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1624706034.05-101836802298463/download_images.py\", line 144, in \n raise Exception(\"Failed to download images %s\" % failed_downloads)\nException: Failed to download images ['k8s.gcr.io/kube-proxy:v1.18.1', 'gcr.io/kubernetes-helm/tiller:v2.16.1']\n", "stderr_lines": ["Traceback (most recent call last):", " File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1624706034.05-101836802298463/download_images.py\", line 144, in ", " raise Exception(\"Failed to download images %s\" % failed_downloads)", "Exception: Failed to download images ['k8s.gcr.io/kube-proxy:v1.18.1', 'gcr.io/kubernetes-helm/tiller:v2.16.1']"], "stdout": "Image is up to date for sha256:a595af0107f98768274e9143be61c7c80a8df2505ced520c9160f4e16ed42cd1\nImage is up to date for sha256:d1ccdd18e6ed8d91e3754e90c4b6cee42750ba165c75d3c78b4a31f057dd0423\nImage is up to date for sha256:6c9320041a7b5d00da54dda3a6bf9d6983b432ca5245a8254c83fc694d023810\nImage is up to date for sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c\nImage is up to date for sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f\nImage is up to date for sha256:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5\nImage is up to date for sha256:cb6799752c46cb16c0c5bebcb355e988a1cd1b3745ce8b67e004abe9a81340a8\nImage is up to date for sha256:fc05bc4225f39e81dcbd0035457977276ed8a6054e6bde6406e39b672e9724f5\nImage is up to date for sha256:53aa421faf0acd88f8a4cb113e9db2cc65b2a2954640ed96a56c4b94233674d8\nImage is up to date for sha256:98793d0a88c823c4fc0fb1b3833d12932be270fc4b6d62bc181f0f54413fe12d\nImage is up to date for sha256:7cf8e2d1b7337338a3f977d07abdf63d80d5005458dfab3ab8962c2bab99d40d\nImage is up to date for sha256:f2a1744e620d3bf673f8351dcfaa5334fe4888cfcd5476b0222499ccc1b158fe\nImage is up to date for 
sha256:a2bef2b25274b1acbdfde5e2f3de432475e15d4036b2108cea2dce968b0c29ea\nImage is up to date for sha256:3061a8a540ac0dee710c1b37edad7855581dc027a18890759c91792615505f13\nImage is up to date for sha256:fb95693fe5c67fc46893c1735d85f00f462dc7f5254f5ddbcbbe85fd4ef71717\nImage download succeeded: k8s.gcr.io/kube-apiserver:v1.18.1\nImage push succeeded: registry.local:9001/k8s.gcr.io/kube-apiserver:v1.18.1\nImage k8s.gcr.io/kube-apiserver:v1.18.1 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/kube-controller-manager:v1.18.1\nImage push succeeded: registry.local:9001/k8s.gcr.io/kube-controller-manager:v1.18.1\nImage k8s.gcr.io/kube-controller-manager:v1.18.1 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/kube-scheduler:v1.18.1\nImage push succeeded: registry.local:9001/k8s.gcr.io/kube-scheduler:v1.18.1\nImage k8s.gcr.io/kube-scheduler:v1.18.1 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/kube-proxy:v1.18.1\n Image download failed: k8s.gcr.io/kube-proxy:v1.18.1404 Client Error: Not Found (\"No such image: k8s.gcr.io/kube-proxy:v1.18.1\")\nImage download succeeded: k8s.gcr.io/pause:3.2\nImage push succeeded: registry.local:9001/k8s.gcr.io/pause:3.2\nImage k8s.gcr.io/pause:3.2 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/etcd:3.4.3-0\nImage push succeeded: registry.local:9001/k8s.gcr.io/etcd:3.4.3-0\nImage k8s.gcr.io/etcd:3.4.3-0 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/coredns:1.6.7\nImage push succeeded: registry.local:9001/k8s.gcr.io/coredns:1.6.7\nImage k8s.gcr.io/coredns:1.6.7 download succeeded by containerd\n Image download failed: quay.io/calico/cni:v3.12.0500 Server Error: Internal Server Error (\"Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\")\nSleep 20s before retry downloading image quay.io/calico/cni:v3.12.0 ...\nImage download succeeded: quay.io/calico/cni:v3.12.0\nImage push succeeded: registry.local:9001/quay.io/calico/cni:v3.12.0\nImage quay.io/calico/cni:v3.12.0 download succeeded by containerd\nImage download succeeded: quay.io/calico/node:v3.12.0\nImage push succeeded: registry.local:9001/quay.io/calico/node:v3.12.0\nImage quay.io/calico/node:v3.12.0 download succeeded by containerd\nImage download succeeded: quay.io/calico/kube-controllers:v3.12.0\nImage push succeeded: registry.local:9001/quay.io/calico/kube-controllers:v3.12.0\nImage quay.io/calico/kube-controllers:v3.12.0 download succeeded by containerd\nImage download succeeded: quay.io/calico/pod2daemon-flexvol:v3.12.0\nImage push succeeded: registry.local:9001/quay.io/calico/pod2daemon-flexvol:v3.12.0\nImage quay.io/calico/pod2daemon-flexvol:v3.12.0 download succeeded by containerd\nImage download succeeded: docker.io/nfvpe/multus:v3.4\nImage push succeeded: registry.local:9001/docker.io/nfvpe/multus:v3.4\nImage docker.io/nfvpe/multus:v3.4 download succeeded by containerd\nImage download succeeded: docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8\nImage push succeeded: registry.local:9001/docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8\nImage docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8 download succeeded by containerd\nImage download succeeded: docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae\nImage push succeeded: registry.local:9001/docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae\nImage 
docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae download succeeded by containerd\nImage download succeeded: gcr.io/kubernetes-helm/tiller:v2.16.1\n Image download failed: gcr.io/kubernetes-helm/tiller:v2.16.1404 Client Error: Not Found (\"No such image: gcr.io/kubernetes-helm/tiller:v2.16.1\")\nImage download succeeded: quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic\nImage push succeeded: registry.local:9001/quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic\nImage quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic download succeeded by containerd\nImage download succeeded: docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0\nImage push succeeded: registry.local:9001/docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0\nImage docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0 doImage is up to date for sha256:7fb3c2364b87e9241db7549bf11d42c129130f44e930d1ce36523fc693186e89\nImage is up to date for sha256:d4553944fbf7b50f20eece0ec3f638202fdbe2a1a597af8c3f3823201cc695b3\nwnload succeeded by containerd\nImage download succeeded: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1\nImage push succeeded: registry.local:9001/quay.io/stackanetes/kubernetes-entrypoint:v0.3.1\nImage quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 download succeeded by containerd\nImage download succeeded: quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\nImage push succeeded: registry.local:9001/quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\nImage quay.io/k8scsi/snapshot-controller:v2.0.0-rc2 download succeeded by containerd\n", "stdout_lines": ["Image is up to date for sha256:a595af0107f98768274e9143be61c7c80a8df2505ced520c9160f4e16ed42cd1", "Image is up to date for sha256:d1ccdd18e6ed8d91e3754e90c4b6cee42750ba165c75d3c78b4a31f057dd0423", "Image is up to date for sha256:6c9320041a7b5d00da54dda3a6bf9d6983b432ca5245a8254c83fc694d023810", "Image is up to date for sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c", "Image is up to date for sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f", "Image is up to date for sha256:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5", "Image is up to date for sha256:cb6799752c46cb16c0c5bebcb355e988a1cd1b3745ce8b67e004abe9a81340a8", "Image is up to date for sha256:fc05bc4225f39e81dcbd0035457977276ed8a6054e6bde6406e39b672e9724f5", "Image is up to date for sha256:53aa421faf0acd88f8a4cb113e9db2cc65b2a2954640ed96a56c4b94233674d8", "Image is up to date for sha256:98793d0a88c823c4fc0fb1b3833d12932be270fc4b6d62bc181f0f54413fe12d", "Image is up to date for sha256:7cf8e2d1b7337338a3f977d07abdf63d80d5005458dfab3ab8962c2bab99d40d", "Image is up to date for sha256:f2a1744e620d3bf673f8351dcfaa5334fe4888cfcd5476b0222499ccc1b158fe", "Image is up to date for sha256:a2bef2b25274b1acbdfde5e2f3de432475e15d4036b2108cea2dce968b0c29ea", "Image is up to date for sha256:3061a8a540ac0dee710c1b37edad7855581dc027a18890759c91792615505f13", "Image is up to date for sha256:fb95693fe5c67fc46893c1735d85f00f462dc7f5254f5ddbcbbe85fd4ef71717", "Image download succeeded: k8s.gcr.io/kube-apiserver:v1.18.1", "Image push succeeded: registry.local:9001/k8s.gcr.io/kube-apiserver:v1.18.1", "Image k8s.gcr.io/kube-apiserver:v1.18.1 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/kube-controller-manager:v1.18.1", "Image push succeeded: registry.local:9001/k8s.gcr.io/kube-controller-manager:v1.18.1", "Image 
k8s.gcr.io/kube-controller-manager:v1.18.1 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/kube-scheduler:v1.18.1", "Image push succeeded: registry.local:9001/k8s.gcr.io/kube-scheduler:v1.18.1", "Image k8s.gcr.io/kube-scheduler:v1.18.1 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/kube-proxy:v1.18.1", " Image download failed: k8s.gcr.io/kube-proxy:v1.18.1404 Client Error: Not Found (\"No such image: k8s.gcr.io/kube-proxy:v1.18.1\")", "Image download succeeded: k8s.gcr.io/pause:3.2", "Image push succeeded: registry.local:9001/k8s.gcr.io/pause:3.2", "Image k8s.gcr.io/pause:3.2 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/etcd:3.4.3-0", "Image push succeeded: registry.local:9001/k8s.gcr.io/etcd:3.4.3-0", "Image k8s.gcr.io/etcd:3.4.3-0 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/coredns:1.6.7", "Image push succeeded: registry.local:9001/k8s.gcr.io/coredns:1.6.7", "Image k8s.gcr.io/coredns:1.6.7 download succeeded by containerd", " Image download failed: quay.io/calico/cni:v3.12.0500 Server Error: Internal Server Error (\"Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\")", "Sleep 20s before retry downloading image quay.io/calico/cni:v3.12.0 ...", "Image download succeeded: quay.io/calico/cni:v3.12.0", "Image push succeeded: registry.local:9001/quay.io/calico/cni:v3.12.0", "Image quay.io/calico/cni:v3.12.0 download succeeded by containerd", "Image download succeeded: quay.io/calico/node:v3.12.0", "Image push succeeded: registry.local:9001/quay.io/calico/node:v3.12.0", "Image quay.io/calico/node:v3.12.0 download succeeded by containerd", "Image download succeeded: quay.io/calico/kube-controllers:v3.12.0", "Image push succeeded: registry.local:9001/quay.io/calico/kube-controllers:v3.12.0", "Image quay.io/calico/kube-controllers:v3.12.0 download succeeded by containerd", "Image download succeeded: quay.io/calico/pod2daemon-flexvol:v3.12.0", "Image push succeeded: registry.local:9001/quay.io/calico/pod2daemon-flexvol:v3.12.0", "Image quay.io/calico/pod2daemon-flexvol:v3.12.0 download succeeded by containerd", "Image download succeeded: docker.io/nfvpe/multus:v3.4", "Image push succeeded: registry.local:9001/docker.io/nfvpe/multus:v3.4", "Image docker.io/nfvpe/multus:v3.4 download succeeded by containerd", "Image download succeeded: docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8", "Image push succeeded: registry.local:9001/docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8", "Image docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8 download succeeded by containerd", "Image download succeeded: docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae", "Image push succeeded: registry.local:9001/docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae", "Image docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae download succeeded by containerd", "Image download succeeded: gcr.io/kubernetes-helm/tiller:v2.16.1", " Image download failed: gcr.io/kubernetes-helm/tiller:v2.16.1404 Client Error: Not Found (\"No such image: gcr.io/kubernetes-helm/tiller:v2.16.1\")", "Image download succeeded: quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic", "Image push succeeded: registry.local:9001/quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic", "Image 
quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic download succeeded by containerd", "Image download succeeded: docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0", "Image push succeeded: registry.local:9001/docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0", "Image docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0 doImage is up to date for sha256:7fb3c2364b87e9241db7549bf11d42c129130f44e930d1ce36523fc693186e89", "Image is up to date for sha256:d4553944fbf7b50f20eece0ec3f638202fdbe2a1a597af8c3f3823201cc695b3", "wnload succeeded by containerd", "Image download succeeded: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1", "Image push succeeded: registry.local:9001/quay.io/stackanetes/kubernetes-entrypoint:v0.3.1", "Image quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 download succeeded by containerd", "Image download succeeded: quay.io/k8scsi/snapshot-controller:v2.0.0-rc2", "Image push succeeded: registry.local:9001/quay.io/k8scsi/snapshot-controller:v2.0.0-rc2", "Image quay.io/k8scsi/snapshot-controller:v2.0.0-rc2 download succeeded by containerd"]} PLAY RECAP **************************************************************************************************************************************************** localhost : ok=187 changed=65 unreachable=0 failed=1 From lists at optimcloud.com Sun Jun 27 11:18:19 2021 From: lists at optimcloud.com (Embedded Devel) Date: Sun, 27 Jun 2021 11:18:19 +0000 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: <1624792215388.2966612007.1585434393@optimcloud.com> On Saturday 26 June 2021 20:17:36 PM (+07:00), Waines, Greg wrote: Agreed that increased docker fs is not documented well … especially if you need to increase the cgts_vg logical volume group in order to increase the docker filesystem size. We have plans to fix this. Yupp seems this is exactly what i need also right now, as im running into the system host-fs-modify controller-0 docker=60 HostFs update failed: Not enough free space on cgts-vg. Current free space 16 GiB, requested total increase 30 GiB Greg. From: open infra Sent: Friday, June 25, 2021 10:30 AM To: Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Finally managed to deploy OpenStack but not sure what caused the issue. Increased disk capacity for docker in worker and reviewed network. On Thu, Jun 24, 2021 at 1:59 PM open infra wrote: I managed to deploy stx-monitoring that require labelling only in controller nodes. Definitely something wrong with worker-0 labelling On Wed, Jun 23, 2021 at 8:52 PM open infra wrote: Here is more information about the issue. http://paste.openstack.org/show/806872/ Then I set the openstack-compute-node label to the controller-0 and re-apply stx-openstack (just to test). Then stx-openstack applying progress continued up to 55%. I can lock/unlcok the worker-0 via controller nodes. So, it should not be a problem with management network. On Mon, Jun 21, 2021 at 4:40 PM open infra wrote: thank you Bill and Thiago. Now I have switched to Release 5. Don't we need to set following labels for release 5 deployment if we supposed to deploy stx-openstack? 
for controllers: system host-label-assign $NODE openstack-control-plane=enabled For worker nodes: system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled Because these labels are not visible in the release 5 installation guide. On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill wrote: Hi again Danishka, we discussed this too. It was suggested that you check /var/logs/armada to see if there are any Armada startup logs that’d help understand what’s going on. Thanks, Bill... From: open infra Sent: Saturday, May 22, 2021 2:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I have deployed StarlingX R4 (bare metal dedicated storage installation). stx-openstack application applying was failed. When I retrieve openstack pods, I can see the status osh-openstack-garbd-garbd-7d4957d9f4-kz95v is pending. I have re-uploaded stx-openstack but the same results. I highly appreciate it if someone can help to fix resolve this matter as we have a demo next week. More details available here. describe pod osh-openstack-garbd-garbd http://paste.openstack.org/show/805587/ describe nodes http://paste.openstack.org/show/805589/ Regards, Danishka -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Linda.Wang at windriver.com Sun Jun 27 22:19:41 2021 From: Linda.Wang at windriver.com (Linda Wang) Date: Sun, 27 Jun 2021 15:19:41 -0700 Subject: [Starlingx-discuss] Bi-Weekly StarlingX OS Distro & Multi-OS Meeting Minutes: June 23, 2021 Message-ID: <5305abb9-7a28-1af9-ce51-7b3af47dd721@windriver.com>
06/23/2021
Attendees: Charles Short, Davlet Panech, Mark Asselstine, Scott Little, Frank Miller, Bill Zvonar, Bart Wensley, Steve Geary, Ramaswamy S., Jason Norton
1. OS Distro (Mark)
 * Debian Build Development status
   o Pulp status --> moving to Aptly (https://www.aptly.info/)
     + Adding source support to pulp_deb is taking longer than expected
     + so pivot to using Aptly
     + both have REST APIs and python3 client libraries
     + discussions around hosting Pulp on CENGN can be put on hold due to this change
     + Revisit Pulp in the coming months as it offers advantages over Aptly that may still prove to be useful to have
   o 5.10 kernel
     + Underway for CentOS base
     + packaging is still RPM; the Debian transition is work in progress
     + Debian 5.10 will not be small like the CentOS kernel. Will include patch history
   o The team is continuing work on minikube, stx config, etc., preparing for patch submission and review
   o OS-Tree: also preparing to submit the spec. Working on writing the spec starting this week.
   o Goal for end of July, internal to WR: complete builds, and producing ISO repos
     + will have access to the build config, but it will likely not be documented
     + will have tooling in place to allow access to the config repo to do builds and produce diff tarballs
   o Q. What is Aptly?
     + https://www.aptly.info
     + allows one to create snapshots, distros, and releases
     + a tool to programmatically set up a debian repo; it holds the debian artifacts, which become the input into OBS and LAT to generate packages and images respectively (a rough sketch of driving the Aptly REST API follows below)
 * Python Status
   o not many updates
   o rebase is finished; the CentOS 8 branch is aligned with the master branch
   o Will get back to Python 3
2. Multi-OS
 * N/A
 * Currently interested in the 5.10 kernel.
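Following up on the "What is Aptly?" question above, purely as an illustrative sketch and not a description of the actual CENGN or build setup: driving Aptly's REST API to stand up a small Debian repo looks roughly like the following. It assumes an "aptly api serve -listen=:8080" instance is already running, and the repo name, distribution and package file are invented for the example.

  # confirm the API is reachable
  curl http://localhost:8080/api/version
  # create an empty local repo
  curl -X POST -H 'Content-Type: application/json' \
       -d '{"Name": "stx-deb", "DefaultDistribution": "bullseye", "DefaultComponent": "main"}' \
       http://localhost:8080/api/repos
  # upload a .deb into a staging directory on the aptly side ...
  curl -X POST -F file=@hello_1.0_amd64.deb http://localhost:8080/api/files/incoming
  # ... and add everything in that directory to the repo
  curl -X POST http://localhost:8080/api/repos/stx-deb/file/incoming
  # publishing for apt clients is a further POST to /api/publish/<prefix>; see the aptly docs

The same operations are exposed through the python3 client libraries mentioned in the minutes.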
-------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Mon Jun 28 05:52:04 2021 From: openinfradn at gmail.com (open infra) Date: Mon, 28 Jun 2021 11:22:04 +0530 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: <1624792215388.2966612007.1585434393@optimcloud.com> References: <1624792215388.2966612007.1585434393@optimcloud.com> Message-ID: On Sun, Jun 27, 2021 at 4:48 PM Embedded Devel wrote: > > > On Saturday 26 June 2021 20:17:36 PM (+07:00), Waines, Greg wrote: > > Agreed that increased docker fs is not documented well … especially if you > need to increase the cgts_vg logical volume group in order to increase the > docker filesystem size. We have plans to fix this. > > > Yupp seems this is exactly what i need also right now, as im running into > the > > > system host-fs-modify controller-0 docker=60 > HostFs update failed: Not enough free space on cgts-vg. Current free space > 16 GiB, requested total increase 30 GiB > > > Please check if you have enough space (either disk or partitions) available and if so, you can extend the logical volume. I have used both methods (GUI and CLI). > > > > Greg. > > > > *From:* open infra > *Sent:* Friday, June 25, 2021 10:30 AM > *To:* Zvonar, Bill > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] stx-openstack application applying > failed > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Finally managed to deploy OpenStack but not sure what caused the issue. > > Increased disk capacity for docker in worker and reviewed network. > > > > On Thu, Jun 24, 2021 at 1:59 PM open infra wrote: > > I managed to deploy stx-monitoring that require labelling only in > controller nodes. > > Definitely something wrong with worker-0 labelling > > > > > > On Wed, Jun 23, 2021 at 8:52 PM open infra wrote: > > Here is more information about the issue. > > http://paste.openstack.org/show/806872/ > > > > > Then I set the openstack-compute-node label to the controller-0 and > re-apply stx-openstack (just to test). > > Then stx-openstack applying progress continued up to 55%. > > > > I can lock/unlcok the worker-0 via controller nodes. So, it should not be > a problem with management network. > > > > > > On Mon, Jun 21, 2021 at 4:40 PM open infra wrote: > > thank you Bill and Thiago. > > Now I have switched to Release 5. > > Don't we need to set following labels for release 5 deployment if we > supposed to deploy stx-openstack? > > > > for controllers: > > > > system host-label-assign $NODE openstack-control-plane=enabled > > > > For worker nodes: > > > > system host-label-assign $NODE openstack-compute-node=enabled > > system host-label-assign $NODE openvswitch=enabled > > > > Because these labels are not visible in the release 5 installation guide. > > > > On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill > wrote: > > Hi again Danishka, we discussed this too. > > > > It was suggested that you check /var/logs/armada to see if there are any > Armada startup logs that’d help understand what’s going on. > > > > Thanks, Bill... > > > > *From:* open infra > *Sent:* Saturday, May 22, 2021 2:28 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] stx-openstack application applying failed > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi, > > > > I have deployed StarlingX R4 (bare metal dedicated storage installation). > > stx-openstack application applying was failed. 
> > > > When I retrieve openstack pods, I can see the status *osh-openstack-garbd-garbd-7d4957d9f4-kz95v > *is pending. > > I have re-uploaded stx-openstack but the same results. > > > > I highly appreciate it if someone can help to fix resolve this matter as > we have a demo next week. > > > > More details available here. > > describe pod osh-openstack-garbd-garbd > > http://paste.openstack.org/show/805587/ > > > describe nodes http://paste.openstack.org/show/805589/ > > > > > Regards, > > Danishka > > > -- > Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at optimcloud.com Mon Jun 28 07:41:36 2021 From: lists at optimcloud.com (Embedded Devel) Date: Mon, 28 Jun 2021 07:41:36 +0000 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: <1624865789816.2646660099.1051207438@optimcloud.com> On Monday 28 June 2021 12:52:04 PM (+07:00), open infra wrote: On Sun, Jun 27, 2021 at 4:48 PM Embedded Devel wrote: On Saturday 26 June 2021 20:17:36 PM (+07:00), Waines, Greg wrote: Agreed that increased docker fs is not documented well … especially if you need to increase the cgts_vg logical volume group in order to increase the docker filesystem size. We have plans to fix this. Yupp seems this is exactly what i need also right now, as im running into the system host-fs-modify controller-0 docker=60 HostFs update failed: Not enough free space on cgts-vg. Current free space 16 GiB, requested total increase 30 GiB Please check if you have enough space (either disk or partitions) available and if so, you can extend the logical volume. I have used both methods (GUI and CLI). yeah im aware of what to do, question is where the doc with the resize commands that i had to do the same for on the last cluster i built. [ 1.626293] sd 0:0:0:0: [sda] 976773168 512-byte logical blocks: (500 GB/466 GiB) controller-0:~# pvdisplay --- Physical volume --- PV Name /dev/sda6 VG Name nova-local PV Size 34.00 GiB / not usable 4.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 8703 Free PE 0 Allocated PE 8703 PV UUID H4XSXd-6nbV-UVML-5UzC-CBRW-Nibt-XcqqYh --- Physical volume --- PV Name /dev/sda5 VG Name cgts-vg PV Size 179.00 GiB / not usable 32.00 MiB Allocatable yes PE Size 32.00 MiB Total PE 5727 Free PE 517 Allocated PE 5210 PV UUID EuOPqF-b3vF-L8cu-ULB5-40Q0-g0W9-07N5lW controller-0:~# df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 16G 0 16G 0% /dev tmpfs 16G 212K 16G 1% /dev/shm tmpfs 16G 13M 16G 1% /run tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/sda4 20G 8.0G 11G 44% / tmpfs 1.0G 112K 1.0G 1% /tmp /dev/mapper/cgts--vg-backup--lv 25G 45M 24G 1% /opt/backups /dev/mapper/cgts--vg-scratch--lv 16G 69M 15G 1% /scratch /dev/mapper/cgts--vg-ceph--mon--lv 20G 95M 19G 1% /var/lib/ceph/mon /dev/sda1 9.5G 37M 9.0G 1% /opt/platform-backup /dev/sda3 477M 106M 343M 24% /boot /dev/mapper/cgts--vg-docker--lv 30G 22G 8.6G 72% /var/lib/docker /dev/sda2 300M 11M 290M 4% /boot/efi /dev/mapper/cgts--vg-kubelet--lv 9.8G 38M 9.2G 1% /var/lib/kubelet /dev/mapper/cgts--vg-log--lv 7.6G 230M 7.0G 4% /var/log -------------- next part -------------- An HTML attachment was scrubbed... 
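That pvdisplay output lines up with the earlier error message: cgts-vg has 517 free physical extents at 32 MiB each, i.e.

  517 PE x 32 MiB/PE = 16544 MiB ~= 16 GiB free

which is why growing docker from 30 GiB to 60 GiB (a 30 GiB increase) is rejected until the volume group is extended. The same number can be read with the stock LVM tools, e.g.:

  vgs cgts-vg
  vgdisplay cgts-vg | grep Free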
URL: From Greg.Waines at windriver.com Mon Jun 28 12:06:17 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Mon, 28 Jun 2021 12:06:17 +0000 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: <1624792215388.2966612007.1585434393@optimcloud.com> References: <1624792215388.2966612007.1585434393@optimcloud.com> Message-ID: Here is what you need to do … we will update this in docs. Greg. # Increase size of cgts-vg LVG in order to increase size of docker fs export NODE=controller-1 ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}') ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') NEW_SIZE=35 NEW_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NEW_SIZE}) NEW_PARTITION_UUID=$(echo ${NEW_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') system host-pv-add ${NODE} cgts-vg ${NEW_PARTITION_UUID} system host-fs-modify controller-1 docker=60 From: Embedded Devel Sent: Sunday, June 27, 2021 7:18 AM To: Waines, Greg ; open infra ; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] On Saturday 26 June 2021 20:17:36 PM (+07:00), Waines, Greg wrote: Agreed that increased docker fs is not documented well … especially if you need to increase the cgts_vg logical volume group in order to increase the docker filesystem size. We have plans to fix this. Yupp seems this is exactly what i need also right now, as im running into the system host-fs-modify controller-0 docker=60 HostFs update failed: Not enough free space on cgts-vg. Current free space 16 GiB, requested total increase 30 GiB Greg. From: open infra > Sent: Friday, June 25, 2021 10:30 AM To: Zvonar, Bill > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Finally managed to deploy OpenStack but not sure what caused the issue. Increased disk capacity for docker in worker and reviewed network. On Thu, Jun 24, 2021 at 1:59 PM open infra > wrote: I managed to deploy stx-monitoring that require labelling only in controller nodes. Definitely something wrong with worker-0 labelling On Wed, Jun 23, 2021 at 8:52 PM open infra > wrote: Here is more information about the issue. http://paste.openstack.org/show/806872/ Then I set the openstack-compute-node label to the controller-0 and re-apply stx-openstack (just to test). Then stx-openstack applying progress continued up to 55%. I can lock/unlcok the worker-0 via controller nodes. So, it should not be a problem with management network. On Mon, Jun 21, 2021 at 4:40 PM open infra > wrote: thank you Bill and Thiago. Now I have switched to Release 5. Don't we need to set following labels for release 5 deployment if we supposed to deploy stx-openstack? for controllers: system host-label-assign $NODE openstack-control-plane=enabled For worker nodes: system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled Because these labels are not visible in the release 5 installation guide. On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill > wrote: Hi again Danishka, we discussed this too. It was suggested that you check /var/logs/armada to see if there are any Armada startup logs that’d help understand what’s going on. Thanks, Bill... 
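As a side note, once the extra partition has been added to cgts-vg per the steps above, the result can be sanity-checked with the standard sysinv queries before retrying the filesystem resize (controller name as in Greg's example; this is only a verification sketch):

  system host-pv-list controller-1
  system host-fs-list controller-1
  df -h /var/lib/docker   # shows the enlarged docker filesystem once the resize has been applied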
From: open infra > Sent: Saturday, May 22, 2021 2:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I have deployed StarlingX R4 (bare metal dedicated storage installation). stx-openstack application applying was failed. When I retrieve openstack pods, I can see the status osh-openstack-garbd-garbd-7d4957d9f4-kz95v is pending. I have re-uploaded stx-openstack but the same results. I highly appreciate it if someone can help to fix resolve this matter as we have a demo next week. More details available here. describe pod osh-openstack-garbd-garbd http://paste.openstack.org/show/805587/ describe nodes http://paste.openstack.org/show/805589/ Regards, Danishka -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at optimcloud.com Mon Jun 28 12:14:52 2021 From: lists at optimcloud.com (Embedded Devel) Date: Mon, 28 Jun 2021 12:14:52 +0000 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: References: Message-ID: <1624882444992.3028693704.3554689924@optimcloud.com> On Monday 28 June 2021 19:06:17 PM (+07:00), Waines, Greg wrote: Here is what you need to do … we will update this in docs. Greg. # Increase size of cgts-vg LVG in order to increase size of docker fs export NODE=controller-1 ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}') ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') NEW_SIZE=35 NEW_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NEW_SIZE}) NEW_PARTITION_UUID=$(echo ${NEW_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') system host-pv-add ${NODE} cgts-vg ${NEW_PARTITION_UUID} system host-fs-modify controller-1 docker=60 ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}') fails on my stx 5.0 simplex [sysadmin at controller-0 ~(keystone_admin)]$ system host-show ${NODE} +-----------------------+----------------------------------------------------------------------+ | Property | Value | +-----------------------+----------------------------------------------------------------------+ | action | none | | administrative | unlocked | | availability | available | | bm_ip | None | | bm_type | none | | bm_username | None | | boot_device | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0 | | capabilities | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} | | clock_synchronization | ntp | | config_applied | 65c1c5ac-546a-45fc-8d82-e9644f1930a2 | | config_status | None | | config_target | 65c1c5ac-546a-45fc-8d82-e9644f1930a2 | | console | tty0 | | created_at | 2021-06-26T10:51:25.595104+00:00 | | device_image_update | None | | hostname | controller-0 | | id | 1 | | install_output | text | | install_state | None | | install_state_info | None | | inv_state | inventoried | | invprovision | provisioned | | location | {} | | mgmt_ip | 192.168.204.2 | | mgmt_mac | 00:00:00:00:00:00 | | operational | enabled | | personality | controller | | reboot_needed | False | | reserved | False | | rootfs_device | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0 | | serialid | None | | software_load | 21.05 | | subfunction_avail | available | | subfunction_oper | enabled | | subfunctions | controller,worker | | task | | | tboot | false | | ttys_dcd | None | | updated_at | 
2021-06-28T12:13:20.119581+00:00 | | uptime | 15240 | | uuid | 2b237d4f-fc3d-4f83-bdf2-b2689469b89e | | vim_progress_status | services-enabled | +-----------------------+----------------------------------------------------------------------+ From: Embedded Devel Sent: Sunday, June 27, 2021 7:18 AM To: Waines, Greg ; open infra ; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] On Saturday 26 June 2021 20:17:36 PM (+07:00), Waines, Greg wrote: Agreed that increased docker fs is not documented well … especially if you need to increase the cgts_vg logical volume group in order to increase the docker filesystem size. We have plans to fix this. Yupp seems this is exactly what i need also right now, as im running into the system host-fs-modify controller-0 docker=60 HostFs update failed: Not enough free space on cgts-vg. Current free space 16 GiB, requested total increase 30 GiB Greg. From: open infra Sent: Friday, June 25, 2021 10:30 AM To: Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Finally managed to deploy OpenStack but not sure what caused the issue. Increased disk capacity for docker in worker and reviewed network. On Thu, Jun 24, 2021 at 1:59 PM open infra wrote: I managed to deploy stx-monitoring that require labelling only in controller nodes. Definitely something wrong with worker-0 labelling On Wed, Jun 23, 2021 at 8:52 PM open infra wrote: Here is more information about the issue. http://paste.openstack.org/show/806872/ Then I set the openstack-compute-node label to the controller-0 and re-apply stx-openstack (just to test). Then stx-openstack applying progress continued up to 55%. I can lock/unlcok the worker-0 via controller nodes. So, it should not be a problem with management network. On Mon, Jun 21, 2021 at 4:40 PM open infra wrote: thank you Bill and Thiago. Now I have switched to Release 5. Don't we need to set following labels for release 5 deployment if we supposed to deploy stx-openstack? for controllers: system host-label-assign $NODE openstack-control-plane=enabled For worker nodes: system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled Because these labels are not visible in the release 5 installation guide. On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill wrote: Hi again Danishka, we discussed this too. It was suggested that you check /var/logs/armada to see if there are any Armada startup logs that’d help understand what’s going on. Thanks, Bill... From: open infra Sent: Saturday, May 22, 2021 2:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I have deployed StarlingX R4 (bare metal dedicated storage installation). stx-openstack application applying was failed. When I retrieve openstack pods, I can see the status osh-openstack-garbd-garbd-7d4957d9f4-kz95v is pending. I have re-uploaded stx-openstack but the same results. I highly appreciate it if someone can help to fix resolve this matter as we have a demo next week. More details available here. 
describe pod osh-openstack-garbd-garbd http://paste.openstack.org/show/805587/ describe nodes http://paste.openstack.org/show/805589/ Regards, Danishka -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Jun 28 14:51:30 2021 From: scott.little at windriver.com (Scott Little) Date: Mon, 28 Jun 2021 10:51:30 -0400 Subject: [Starlingx-discuss] Slow repo sync times and the yocto kernel In-Reply-To: <1d1828bf-2aa4-f100-42ee-3d11d14f5af8@windriver.com> References: <1d1828bf-2aa4-f100-42ee-3d11d14f5af8@windriver.com> Message-ID: The manifest change to implement option 2 has been merged.  However a lot of folks are now having 'repo sync' issues with linux-yocto.  Strangely it doesn't hit the first repo sync, but all subsequent repo sync's are affected. If you are seeing an error from repo sync, please use this work around ... repo init ... rm -rf .repo/project-objects/linux-yocto.git* .repo/projects/cgcs-root/stx/git/linux-yocto-* repo sync ... I've reported the bug upsteam... https://bugs.chromium.org/p/gerrit/issues/detail?id=14700&q=tyranscooter&can=2 ... if you have a gmail account and are willing to do so, add a star to increase visibility of this strange error. I'll also be looking for a better fix on our end. Scott On 2021-06-21 4:10 p.m., Scott Little wrote: > Hi all > > The yocto kernel git was added to the StarlingX manifests late last > week.  Since then I've heard a lot of grumbling about slow 'repo sync' > times.  It affects folks setting up a new distro or monolithic > workspace for the first time.  The repo-sync time can exceed an hour > as the entire history of the linux kernel is downloaded.  You will > also notice an additional 5.5 GB of storage consumed to hold all this > history.  Subsequent repo sync's should be fast. > > So the question is... what if anything do we do about it? > > Our options... > > 1) Leave it as is. > > Hope that folks are mostly working in the 'flock' or 'container' > layers, and NOT using monolithic builds, and so the number of folk > impacted is low.   Folk working at the distro layer or using > monolithic can work on something else, or go to lunch, while they wait > for the initial repo sync to complete. > > > 2) Try to minimize the amount of kernel history we download through a > manifest change.  Limiting the git history depth does the trick ... > >    >   clone-depth="100" upstream="v5.10/standard/intel-x86" > revision="refs/tags/v5.10.30" name="linux-yocto" > path="cgcs-root/stx/git/linux-yocto-std"/> >   clone-depth="100" upstream="v5.10/standard/preempt-rt/intel-x86" > revision="2112f10d3d0b558c9ece3ab562c41b7f6d89cff4" > name="linux-yocto.git" path="cgcs-root/stx/git/linux-yocto-rt"/> > > The good ... > >  - repo sync time drops from ~1 hr to ~5 min > >  - storage drops from ~5.5 GB to ~ 5GB > > The bad ... > > - This is fragile.  It assumes that the desired rt sha can be reached > from 100 commits from head of branch.  However, the connection to the > upstream git server drops if we ask for much more than that. e.g. > depth=500 is a guaranteed fail.  So upstream adds a few patches and we > might start failing our repo-sync. > > - The history is incomplete, this may hinder kernel developers. A 'git > fetch linux-yocto' should pull in the rest of the history, so probably > not a blocker. 
> > > 3) We could double the number of manifests at each layer. One would > only pull in the minimal kernel history, and other the full history. > > > 4) Create a mirror an a larger git server, like github, and hope that > significantly improves the download speed. > > > 5) Download a tarball of the yocto kernel, rather than pulling in it's > git tree.  Yocto's git server doesn't seem to be set up to serve > custom tarballs based on a requested sha.  We would have to set it all > up manually, and it's not remotely convenient to kernel developers. > > > Of these options, I'm leaning toward option 2, but look forward to > hearing from the community. > > > Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Mon Jun 28 14:54:11 2021 From: scott.little at windriver.com (Scott Little) Date: Mon, 28 Jun 2021 10:54:11 -0400 Subject: [Starlingx-discuss] Slow repo sync times and the yocto kernel In-Reply-To: References: <1d1828bf-2aa4-f100-42ee-3d11d14f5af8@windriver.com> Message-ID: <15da574c-7b55-3f2c-0776-a0c7b14494f8@windriver.com> I have posted a better fix for review ... https://review.opendev.org/c/starlingx/manifest/+/798340 Scott On 2021-06-28 10:51 a.m., Scott Little wrote: > The manifest change to implement option 2 has been merged.  However a > lot of folks are now having 'repo sync' issues with linux-yocto. > Strangely it doesn't hit the first repo sync, but all subsequent repo > sync's are affected. > > If you are seeing an error from repo sync, please use this work around > ... > > repo init ... > rm -rf .repo/project-objects/linux-yocto.git* > .repo/projects/cgcs-root/stx/git/linux-yocto-* > repo sync ... > > I've reported the bug upsteam... > https://bugs.chromium.org/p/gerrit/issues/detail?id=14700&q=tyranscooter&can=2 > ... if you have a gmail account and are willing to do so, add a star > to increase visibility of this strange error. > > I'll also be looking for a better fix on our end. > > Scott > > > > On 2021-06-21 4:10 p.m., Scott Little wrote: >> Hi all >> >> The yocto kernel git was added to the StarlingX manifests late last >> week.  Since then I've heard a lot of grumbling about slow 'repo >> sync' times.  It affects folks setting up a new distro or monolithic >> workspace for the first time.  The repo-sync time can exceed an hour >> as the entire history of the linux kernel is downloaded.  You will >> also notice an additional 5.5 GB of storage consumed to hold all this >> history.  Subsequent repo sync's should be fast. >> >> So the question is... what if anything do we do about it? >> >> Our options... >> >> 1) Leave it as is. >> >> Hope that folks are mostly working in the 'flock' or 'container' >> layers, and NOT using monolithic builds, and so the number of folk >> impacted is low.   Folk working at the distro layer or using >> monolithic can work on something else, or go to lunch, while they >> wait for the initial repo sync to complete. >> >> >> 2) Try to minimize the amount of kernel history we download through a >> manifest change.  Limiting the git history depth does the trick ... 
>> >>    >>   > clone-depth="100" upstream="v5.10/standard/intel-x86" >> revision="refs/tags/v5.10.30" name="linux-yocto" >> path="cgcs-root/stx/git/linux-yocto-std"/> >>   > clone-depth="100" upstream="v5.10/standard/preempt-rt/intel-x86" >> revision="2112f10d3d0b558c9ece3ab562c41b7f6d89cff4" >> name="linux-yocto.git" path="cgcs-root/stx/git/linux-yocto-rt"/> >> >> The good ... >> >>  - repo sync time drops from ~1 hr to ~5 min >> >>  - storage drops from ~5.5 GB to ~ 5GB >> >> The bad ... >> >> - This is fragile.  It assumes that the desired rt sha can be reached >> from 100 commits from head of branch.  However, the connection to the >> upstream git server drops if we ask for much more than that. e.g. >> depth=500 is a guaranteed fail.  So upstream adds a few patches and >> we might start failing our repo-sync. >> >> - The history is incomplete, this may hinder kernel developers. A >> 'git fetch linux-yocto' should pull in the rest of the history, so >> probably not a blocker. >> >> >> 3) We could double the number of manifests at each layer. One would >> only pull in the minimal kernel history, and other the full history. >> >> >> 4) Create a mirror an a larger git server, like github, and hope that >> significantly improves the download speed. >> >> >> 5) Download a tarball of the yocto kernel, rather than pulling in >> it's git tree.  Yocto's git server doesn't seem to be set up to serve >> custom tarballs based on a requested sha.  We would have to set it >> all up manually, and it's not remotely convenient to kernel developers. >> >> >> Of these options, I'm leaning toward option 2, but look forward to >> hearing from the community. >> >> >> Scott >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From lists at optimcloud.com Mon Jun 28 16:12:46 2021 From: lists at optimcloud.com (Embedded Devel) Date: Mon, 28 Jun 2021 16:12:46 +0000 Subject: [Starlingx-discuss] fails to deploy openstack Message-ID: <1624896529116.2637235973.4033276379@optimcloud.com> stx simplex 5.0 bare metal fails to deploy openstack name | stx-openstack | progress | operation aborted, check logs for detail | kubectl get pods -n openstack openstack ingress-7754d468d-t9wvh 1/1 Running 0 35m openstack ingress-error-pages-75dd8b57d8-d6rhk 1/1 Running 0 35m openstack mariadb-ingress-6b9f6964f5-7l77l 0/1 Running 0 35m openstack mariadb-ingress-error-pages-86c79d7dd4-6nzpw 1/1 Running 0 35m openstack mariadb-server-0 0/1 Pending 0 35m logs say cat /var/log/armada/stx-openstack-apply_2021-06-28-14-40-43.log 2021-06-28 14:40:46.085 68 DEBUG armada.handlers.document [-] Resolving reference /tmp/manifests/stx-openstack/1.0-83-centos-stable-versioned/stx-openstack-stx-openstack.yaml. 
resolve_reference /usr/local/lib/python3.6/dist-packages/armada/handlers/document.py:49 2021-06-28 14:40:46.369 68 DEBUG armada.handlers.tiller [-] Using Tiller host IP: 127.0.0.1 _get_tiller_ip /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:165 2021-06-28 14:40:46.370 68 DEBUG armada.handlers.tiller [-] Using Tiller host port: 24134 _get_tiller_port /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:174 2021-06-28 14:40:46.370 68 DEBUG armada.handlers.tiller [-] Tiller getting gRPC insecure channel at 127.0.0.1:24134 with options: [grpc.max_send_message_length=429496729, grpc.max_receive_message_length=429496729] get_channel /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:127 2021-06-28 14:40:46.375 68 DEBUG armada.handlers.tiller [-] Armada is using Tiller at: 127.0.0.1:24134, namespace=kube-system, timeout=300 __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:107 2021-06-28 14:40:46.375 68 INFO armada.handlers.lock [-] Acquiring lock 2021-06-28 14:40:46.502 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-psp-rolebinding validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.502 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] helm-toolkit validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.503 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ingress validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.503 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nginx-ports-control validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.504 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-mariadb validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.504 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-garbd validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.505 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-memcached validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.505 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-rabbitmq validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.506 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-keystone validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.506 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-keystone-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.507 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-barbican validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.507 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ceph-rgw validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 
2021-06-28 14:40:46.508 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-glance validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.508 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-cinder validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.509 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-libvirt validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.509 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-openvswitch validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.510 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nova validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.510 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-placement validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.511 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nova-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.511 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-neutron validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.512 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ironic validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.512 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-heat validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.513 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-aodh validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.513 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-gnocchi validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.514 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-panko validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.514 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ceilometer validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.515 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-fm-rest-api validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.515 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-horizon validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.516 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-dcdbsync validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.516 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-ingress validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-mariadb validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-memcached validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-rabbitmq validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-keystone validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-keystone-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-barbican validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-glance validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-ceph-rgw validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-cinder validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-compute-kit validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-heat validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-fm-rest-api validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-horizon validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-telemetry validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-dcdbsync validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.521 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-psp-rolebinding validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.521 68 DEBUG armada.utils.validate [-] Validating document [armada/Manifest/v1] armada-manifest validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-psp-rolebinding validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] helm-toolkit validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ingress validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.527 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nginx-ports-control validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.527 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-mariadb validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.528 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-garbd validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.528 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-memcached validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.529 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-rabbitmq validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.529 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-keystone validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.530 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-keystone-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.530 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-barbican validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.531 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ceph-rgw validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.531 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-glance validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.532 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-cinder validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.532 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-libvirt validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.533 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-openvswitch validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.533 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nova validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.534 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-placement validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.534 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nova-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.535 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-neutron validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.535 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ironic validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.536 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-heat validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.536 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-aodh validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.537 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-gnocchi validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.537 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-panko validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.538 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ceilometer validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.538 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-fm-rest-api validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.539 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-horizon validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.539 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-dcdbsync validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-ingress validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-mariadb validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-memcached validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-rabbitmq validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-keystone validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-keystone-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-barbican validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-glance validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-ceph-rgw validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-cinder validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-compute-kit validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-heat validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-fm-rest-api validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-horizon validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-telemetry validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-dcdbsync validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-psp-rolebinding validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating document [armada/Manifest/v1] armada-manifest validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108
2021-06-28 14:40:46.553 68 INFO armada.handlers.armada [-] Performing pre-flight operations.
2021-06-28 14:40:46.553 68 DEBUG armada.handlers.tiller [-] Using Tiller host IP: 127.0.0.1 _get_tiller_ip /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:165
2021-06-28 14:40:46.553 68 DEBUG armada.handlers.tiller [-] Getting Tiller Status: Tiller exists tiller_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:186
2021-06-28 14:40:46.553 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/psp-rolebinding-0.1.0.tgz
2021-06-28 14:40:46.553 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.557 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/ingress-0.1.0.tgz
2021-06-28 14:40:46.557 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.570 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/helm-toolkit-0.1.0.tgz
2021-06-28 14:40:46.570 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.581 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/nginx-ports-control-0.1.0.tgz
2021-06-28 14:40:46.581 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.584 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/mariadb-0.1.0.tgz
2021-06-28 14:40:46.584 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.600 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/memcached-0.1.0.tgz
2021-06-28 14:40:46.600 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.612 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/rabbitmq-0.1.0.tgz
2021-06-28 14:40:46.612 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.626 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/keystone-0.1.0.tgz
2021-06-28 14:40:46.626 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.641 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/glance-0.1.0.tgz
2021-06-28 14:40:46.641 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.658 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/cinder-0.1.0.tgz
2021-06-28 14:40:46.658 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.674 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/libvirt-0.1.0.tgz
2021-06-28 14:40:46.674 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.688 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/openvswitch-0.1.0.tgz
2021-06-28 14:40:46.688 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.702 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/nova-0.1.0.tgz
2021-06-28 14:40:46.703 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.727 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/nova-api-proxy-0.1.0.tgz
2021-06-28 14:40:46.727 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.739 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/neutron-0.1.0.tgz
2021-06-28 14:40:46.739 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.755 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/placement-0.1.0.tgz
2021-06-28 14:40:46.755 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.768 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/heat-0.1.0.tgz
2021-06-28 14:40:46.768 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.783 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/fm-rest-api-0.1.0.tgz
2021-06-28 14:40:46.783 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.796 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/horizon-0.1.0.tgz
2021-06-28 14:40:46.796 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts
2021-06-28 14:40:46.810 68 DEBUG armada.handlers.tiller [-] Tiller ListReleases() with timeout=300, request=limit: 32
status_codes: UNKNOWN
status_codes: DEPLOYED
status_codes: DELETED
status_codes: DELETING
status_codes: FAILED
status_codes: PENDING_INSTALL
status_codes: PENDING_UPGRADE
status_codes: PENDING_ROLLBACK
get_results /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:215
2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-mariadb, version 1, status: FAILED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-ingress, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-nginx-ports-control, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-psp-rolebinding, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release stx-cephfs-provisioner, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release stx-ceph-pools-audit, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release stx-rbd-provisioner, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release cm-cert-manager-psp-rolebinding, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release cm-cert-manager, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release ic-nginx-ingress, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276
2021-06-28 14:40:46.833 68 INFO armada.handlers.armada [-] Processing ChartGroup: openstack-psp-rolebinding (Deploy psp rolebinding), sequenced=True
2021-06-28 14:40:46.833 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-psp-rolebinding]: Processing Chart, release=osh-openstack-psp-rolebinding
2021-06-28 14:40:46.833 68 DEBUG armada.handlers.wait [-] [chart=openstack-psp-rolebinding]: Resolved `wait.resources` list: [] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89
2021-06-28 14:40:46.834 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-psp-rolebinding]: Existing release osh-openstack-psp-rolebinding found in namespace openstack
2021-06-28 14:40:46.834 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-psp-rolebinding]: Checking for updates to chart release inputs.
2021-06-28 14:40:46.835 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-psp-rolebinding]: Found no updates to chart release inputs
2021-06-28 14:40:46.835 68 INFO armada.handlers.armada [-] All Charts applied in ChartGroup openstack-psp-rolebinding.
2021-06-28 14:40:46.835 68 INFO armada.handlers.armada [-] Processing ChartGroup: openstack-ingress (OpenStack Ingress Controller), sequenced=False
2021-06-28 14:40:46.836 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-ingress]: Processing Chart, release=osh-openstack-ingress
2021-06-28 14:40:46.836 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-nginx-ports-control]: Processing Chart, release=osh-openstack-nginx-ports-control
2021-06-28 14:40:46.836 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Resolved `wait.resources` list: [{'type': 'job', 'required': False, 'labels': {'release_group': 'osh-openstack-ingress'}}, {'type': 'pod', 'labels': {'release_group': 'osh-openstack-ingress'}}] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89
2021-06-28 14:40:46.836 68 DEBUG armada.handlers.wait [-] [chart=openstack-nginx-ports-control]: Resolved `wait.resources` list: [] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89
2021-06-28 14:40:46.836 68 INFO armada.handlers.chartbuilder [-] [chart=openstack-ingress]: Building dependency chart helm-toolkit for release openstack-ingress.
2021-06-28 14:40:46.838 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-nginx-ports-control]: Existing release osh-openstack-nginx-ports-control found in namespace openstack
2021-06-28 14:40:46.839 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-nginx-ports-control]: Checking for updates to chart release inputs.
2021-06-28 14:40:46.841 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-nginx-ports-control]: Found no updates to chart release inputs
2021-06-28 14:40:46.847 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-ingress]: Existing release osh-openstack-ingress found in namespace openstack
2021-06-28 14:40:46.850 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-ingress]: Checking for updates to chart release inputs.
2021-06-28 14:40:46.903 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-ingress]: Found no updates to chart release inputs
2021-06-28 14:40:46.903 68 INFO armada.handlers.wait [-] [chart=openstack-ingress]: Waiting for resource type=job, namespace=openstack labels=release_group=osh-openstack-ingress required=False for 1800s
2021-06-28 14:40:46.903 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Starting to wait on: namespace=openstack, resource type=job, label_selector=(release_group=osh-openstack-ingress), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367
2021-06-28 14:40:46.907 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Skipping non-required wait, no job resources found. _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:386
2021-06-28 14:40:46.908 68 INFO armada.handlers.wait [-] [chart=openstack-ingress]: Waiting for resource type=pod, namespace=openstack labels=release_group=osh-openstack-ingress required=True for 1800s
2021-06-28 14:40:46.908 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Starting to wait on: namespace=openstack, resource type=pod, label_selector=(release_group=osh-openstack-ingress), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367
2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: pod ingress-7754d468d-d7dz4 is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258
2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: pod ingress-error-pages-75dd8b57d8-hbdfb is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258
2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Found no modified resources. wait /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:302
2021-06-28 14:40:46.919 68 INFO armada.handlers.armada [-] All Charts applied in ChartGroup openstack-ingress.
2021-06-28 14:40:46.919 68 INFO armada.handlers.armada [-] Processing ChartGroup: openstack-mariadb (Mariadb), sequenced=True
2021-06-28 14:40:46.919 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-mariadb]: Processing Chart, release=osh-openstack-mariadb
2021-06-28 14:40:46.920 68 DEBUG armada.handlers.wait [-] [chart=openstack-mariadb]: Resolved `wait.resources` list: [{'type': 'job', 'required': False, 'labels': {'release_group': 'osh-openstack-mariadb'}}, {'type': 'pod', 'labels': {'release_group': 'osh-openstack-mariadb'}}] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89
2021-06-28 14:40:46.920 68 INFO armada.handlers.chartbuilder [-] [chart=openstack-mariadb]: Building dependency chart helm-toolkit for release openstack-mariadb.
2021-06-28 14:40:46.928 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-mariadb]: Purging release osh-openstack-mariadb with status FAILED
2021-06-28 14:40:46.928 68 INFO armada.handlers.tiller [-] [chart=openstack-mariadb]: Delete osh-openstack-mariadb release with disable_hooks=False, purge=True, timeout=300 flags
2021-06-28 14:40:47.384 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-mariadb]: Installing release osh-openstack-mariadb in namespace openstack, wait=True, timeout=1800s
2021-06-28 14:40:47.387 68 INFO armada.handlers.tiller [-] [chart=openstack-mariadb]: Helm install release: wait=True, timeout=1800
2021-06-28 14:41:46.445 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
2021-06-28 14:42:46.508 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176
[sysadmin at controller-0 ~(keystone_admin)]$

--
Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com


From ashlee at openstack.org  Mon Jun 28 16:46:49 2021
From: ashlee at openstack.org (Ashlee Ferguson)
Date: Mon, 28 Jun 2021 11:46:49 -0500
Subject: [Starlingx-discuss] OpenInfra Live - July 1 at 9am CT
Message-ID:

Hi everyone,

This week’s OpenInfra Live episode is brought to you by the Open Infrastructure Foundation. The GSMA expects there will be over 24 billion edge connections by 2025, 20% of them on 5G. The intelligent edge is the analysis of data and the development of solutions at the site where the data is generated. A recent survey, conducted by Wind River and supported by the OpenInfra Foundation, highlights some key trends around the intelligent edge and the critical role open source plays in it. Join Ildiko Vancsa as she hosts Mark Collier and Paul Miller to learn about the results of the survey and gain insight into how upstream communities are making progress for this emerging use case.

Episode: Building the Intelligent Edge with Open Source Technologies
Date and time: July 1 at 9am CT (1400 UTC)

You can watch us live on:
YouTube: https://www.youtube.com/watch?v=pHHCIGkpNzs
LinkedIn: https://www.linkedin.com/feed/update/urn:li:ugcPost:6813893379852763136/
Facebook: https://www.facebook.com/104139126308032/posts/4096084423780129/
WeChat: recording will be posted on OpenStack WeChat after the live stream

Speakers:
Paul Miller - Wind River
Mark Collier - Open Infrastructure Foundation
Ildiko Vancsa - Open Infrastructure Foundation

Have an idea for a future episode? Share it now at ideas.openinfra.live.

Thanks,
Ashlee
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From douglas.pereira at windriver.com  Mon Jun 28 17:09:48 2021
From: douglas.pereira at windriver.com (Douglas Lopes Pereira)
Date: Mon, 28 Jun 2021 14:09:48 -0300
Subject: [Starlingx-discuss] fails to deploy openstack
In-Reply-To: <1624896529116.2637235973.4033276379@optimcloud.com>
References: <1624896529116.2637235973.4033276379@optimcloud.com>
Message-ID:

Hi,

we would need to check why the mariadb-server-0 pod is in a pending state. What is the output of the following command?
kubectl describe pod mariadb-server-0 -n openstack

Regards,
Doug

On Mon, Jun 28, 2021 at 1:12 PM Embedded Devel wrote:

> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> stx simplex 5.0 bare metal fails to deploy openstack
>
> name     | stx-openstack |
> progress | operation aborted, check logs for detail |
>
> kubectl get pods -n openstack
>
> openstack   ingress-7754d468d-t9wvh                        1/1   Running   0   35m
> openstack   ingress-error-pages-75dd8b57d8-d6rhk           1/1   Running   0   35m
> openstack   mariadb-ingress-6b9f6964f5-7l77l               0/1   Running   0   35m
> openstack   mariadb-ingress-error-pages-86c79d7dd4-6nzpw   1/1   Running   0   35m
> openstack   mariadb-server-0                               0/1   Pending   0   35m
>
> logs say
> cat /var/log/armada/stx-openstack-apply_2021-06-28-14-40-43.log
> [...]
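A general note for anyone hitting the same symptom: a pod stuck in Pending means the scheduler has not been able to place it yet, most often because of insufficient CPU/memory on the node, an unsatisfied node selector or taint, or an unbound PersistentVolumeClaim. Besides the describe command above, the following checks usually narrow it down; only the pod name and namespace are taken from the report above, the rest is generic kubectl/ceph usage and a sketch of a troubleshooting flow rather than a confirmed StarlingX procedure:

# The Events section at the end of the describe output states why scheduling fails
kubectl -n openstack describe pod mariadb-server-0

# Recent scheduling and volume events in the namespace, newest last
kubectl -n openstack get events --sort-by='.lastTimestamp' | tail -n 20

# mariadb-server-0 is normally backed by a PersistentVolumeClaim; it must be Bound
kubectl -n openstack get pvc

# The storage provisioner pods should be Running, and a Ceph-backed system should report healthy
kubectl -n kube-system get pods | grep -i provisioner
ceph -s

If the PVC itself is also Pending, the problem is on the storage side rather than in the stx-openstack application, and the Ceph/provisioner state is the place to look next.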
Download Vivaldi for free at vivaldi.com > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at optimcloud.com Mon Jun 28 17:13:54 2021 From: lists at optimcloud.com (Embedded Devel) Date: Mon, 28 Jun 2021 17:13:54 +0000 Subject: [Starlingx-discuss] fails to deploy openstack In-Reply-To: References: Message-ID: <1624900425084.144021533.2106006065@optimcloud.com> [sysadmin at controller-0 ~(keystone_admin)]$ kubectl describe pod mariadb-server-0 -n openstack Name: mariadb-server-0 Namespace: openstack Priority: 0 Node: Labels: application=mariadb component=server controller-revision-hash=mariadb-server-5dc96f4645 release_group=osh-openstack-mariadb statefulset.kubernetes.io/pod-name=mariadb-server-0 Annotations: configmap-bin-hash: c7bbcd0d5c26e095ef66f9d8387555bbee4c9baaf1a3f18ae5d4eef7041cc987 configmap-etc-hash: 3a2301819580752ada82bdf35edfa8e018b79bf5d0d5d2a5253acef6057cfd4d mariadb-dbadmin-password-hash: 503e2ef2dadb3c74192749156bfb35fbe1c0ac9dc5e9e0c00eb5b473e64cfc5c mariadb-sst-password-hash: 503e2ef2dadb3c74192749156bfb35fbe1c0ac9dc5e9e0c00eb5b473e64cfc5c openstackhelm.openstack.org/release_uuid: Status: Pending IP: IPs: Controlled By: StatefulSet/mariadb-server Init Containers: init: Image: registry.local:9001/quay.io/airshipit/kubernetes-entrypoint:v1.0.0 Port: Host Port: Command: kubernetes-entrypoint Environment: POD_NAME: mariadb-server-0 (v1:metadata.name) NAMESPACE: openstack (v1:metadata.namespace) INTERFACE_NAME: eth0 PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ DEPENDENCY_SERVICE: DEPENDENCY_DAEMONSET: DEPENDENCY_CONTAINER: DEPENDENCY_POD_JSON: DEPENDENCY_CUSTOM_RESOURCE: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from osh-openstack-mariadb-mariadb-token-d26pq (ro) mariadb-perms: Image: registry.local:9001/docker.io/openstackhelm/mariadb:ubuntu_xenial-20200303 Port: Host Port: Command: chown -R mysql:mysql /var/lib/mysql Environment: Mounts: /tmp from pod-tmp (rw) /var/lib/mysql from mysql-data (rw) /var/run/secrets/kubernetes.io/serviceaccount from osh-openstack-mariadb-mariadb-token-d26pq (ro) Containers: mariadb: Image: registry.local:9001/docker.io/openstackhelm/mariadb:ubuntu_xenial-20200303 Ports: 3306/TCP, 4567/TCP Host Ports: 0/TCP, 0/TCP Command: /tmp/start.py Readiness: exec [/tmp/readiness.sh] delay=30s timeout=15s period=30s #success=1 #failure=3 Startup: exec [/tmp/readiness.sh] delay=30s timeout=1s period=30s #success=1 #failure=3 Environment: POD_NAMESPACE: openstack (v1:metadata.namespace) MARIADB_REPLICAS: 1 POD_NAME_PREFIX: mariadb-server DISCOVERY_DOMAIN: mariadb-discovery.openstack.svc.cluster.local DIRECT_SVC_NAME: mariadb-server WSREP_PORT: 4567 STATE_CONFIGMAP: osh-openstack-mariadb-mariadb-state MYSQL_DBADMIN_USERNAME: root MYSQL_DBADMIN_PASSWORD: Optional: false MYSQL_DBSST_USERNAME: sst MYSQL_DBSST_PASSWORD: Optional: false MYSQL_DBAUDIT_USERNAME: audit MYSQL_DBAUDIT_PASSWORD: Optional: false Mounts: /etc/mysql/admin_user.cnf from mariadb-secrets (ro,path="admin_user.cnf") /etc/mysql/conf.d from mycnfd (rw) /etc/mysql/conf.d/00-base.cnf from mariadb-etc (ro,path="00-base.cnf") /etc/mysql/conf.d/99-force.cnf from mariadb-etc (ro,path="99-force.cnf") /etc/mysql/my.cnf from mariadb-etc (ro,path="my.cnf") /tmp from pod-tmp (rw) /tmp/readiness.sh from mariadb-bin 
(ro,path="readiness.sh") /tmp/start.py from mariadb-bin (ro,path="start.py") /tmp/stop.sh from mariadb-bin (ro,path="stop.sh") /var/lib/mysql from mysql-data (rw) /var/run/mysqld from var-run (rw) /var/run/secrets/kubernetes.io/serviceaccount from osh-openstack-mariadb-mariadb-token-d26pq (ro) Conditions: Type Status PodScheduled False Volumes: mysql-data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: mysql-data-mariadb-server-0 ReadOnly: false pod-tmp: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: mycnfd: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: var-run: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: mariadb-bin: Type: ConfigMap (a volume populated by a ConfigMap) Name: mariadb-bin Optional: false mariadb-etc: Type: ConfigMap (a volume populated by a ConfigMap) Name: mariadb-etc Optional: false mariadb-secrets: Type: Secret (a volume populated by a Secret) SecretName: mariadb-secrets Optional: false osh-openstack-mariadb-mariadb-token-d26pq: Type: Secret (a volume populated by a Secret) SecretName: osh-openstack-mariadb-mariadb-token-d26pq Optional: false QoS Class: BestEffort Node-Selectors: openstack-control-plane=enabled Tolerations: :NoExecute Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 2m35s (x81 over 98m) default-scheduler running "VolumeBinding" filter plugin for pod "mariadb-server-0": pod has unbound immediate PersistentVolumeClaims On Tuesday 29 June 2021 00:09:48 AM (+07:00), Douglas Lopes Pereira wrote: Hi, we would need to check why the mariadb-server-0 pod is in a pending state. What is the output for the following command? kubectl describe pod mariadb-server-0 -n openstack Regards, Doug On Mon, Jun 28, 2021 at 1:12 PM Embedded Devel wrote: [Please note: This e-mail is from an EXTERNAL e-mail address] stx simplex 5.0 bare metal fails to deploy openstack name | stx-openstack | progress | operation aborted, check logs for detail | kubectl get pods -n openstack openstack ingress-7754d468d-t9wvh 1/1 Running 0 35m openstack ingress-error-pages-75dd8b57d8-d6rhk 1/1 Running 0 35m openstack mariadb-ingress-6b9f6964f5-7l77l 0/1 Running 0 35m openstack mariadb-ingress-error-pages-86c79d7dd4-6nzpw 1/1 Running 0 35m openstack mariadb-server-0 0/1 Pending 0 35m logs say cat /var/log/armada/stx-openstack-apply_2021-06-28-14-40-43.log 2021-06-28 14:40:46.085 68 DEBUG armada.handlers.document [-] Resolving reference /tmp/manifests/stx-openstack/1.0-83-centos-stable-versioned/stx-openstack-stx-openstack.yaml. 
resolve_reference /usr/local/lib/python3.6/dist-packages/armada/handlers/document.py:49 2021-06-28 14:40:46.369 68 DEBUG armada.handlers.tiller [-] Using Tiller host IP: 127.0.0.1 _get_tiller_ip /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:165 2021-06-28 14:40:46.370 68 DEBUG armada.handlers.tiller [-] Using Tiller host port: 24134 _get_tiller_port /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:174 2021-06-28 14:40:46.370 68 DEBUG armada.handlers.tiller [-] Tiller getting gRPC insecure channel at 127.0.0.1:24134 with options: [grpc.max_send_message_length=429496729, grpc.max_receive_message_length=429496729] get_channel /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:127 2021-06-28 14:40:46.375 68 DEBUG armada.handlers.tiller [-] Armada is using Tiller at: 127.0.0.1:24134, namespace=kube-system, timeout=300 __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:107 2021-06-28 14:40:46.375 68 INFO armada.handlers.lock [-] Acquiring lock 2021-06-28 14:40:46.502 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-psp-rolebinding validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.502 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] helm-toolkit validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.503 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ingress validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.503 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nginx-ports-control validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.504 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-mariadb validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.504 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-garbd validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.505 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-memcached validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.505 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-rabbitmq validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.506 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-keystone validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.506 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-keystone-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.507 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-barbican validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.507 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ceph-rgw validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 
2021-06-28 14:40:46.508 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-glance validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.508 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-cinder validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.509 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-libvirt validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.509 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-openvswitch validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.510 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nova validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.510 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-placement validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.511 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nova-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.511 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-neutron validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.512 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ironic validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.512 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-heat validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.513 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-aodh validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.513 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-gnocchi validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.514 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-panko validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.514 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ceilometer validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.515 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-fm-rest-api validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.515 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-horizon validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.516 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-dcdbsync validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 
14:40:46.516 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-ingress validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-mariadb validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-memcached validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-rabbitmq validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-keystone validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-keystone-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-barbican validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-glance validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-ceph-rgw validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-cinder validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-compute-kit validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-heat validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-fm-rest-api validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-horizon validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-telemetry validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-dcdbsync validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.521 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-psp-rolebinding 
validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.521 68 DEBUG armada.utils.validate [-] Validating document [armada/Manifest/v1] armada-manifest validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-psp-rolebinding validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] helm-toolkit validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ingress validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.527 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nginx-ports-control validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.527 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-mariadb validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.528 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-garbd validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.528 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-memcached validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.529 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-rabbitmq validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.529 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-keystone validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.530 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-keystone-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.530 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-barbican validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.531 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ceph-rgw validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.531 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-glance validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.532 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-cinder validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.532 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-libvirt validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.533 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] 
openstack-openvswitch validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.533 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nova validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.534 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-placement validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.534 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-nova-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.535 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-neutron validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.535 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ironic validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.536 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-heat validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.536 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-aodh validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.537 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-gnocchi validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.537 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-panko validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.538 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-ceilometer validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.538 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-fm-rest-api validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.539 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-horizon validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.539 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] openstack-dcdbsync validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-ingress validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-mariadb validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-memcached validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating document 
[armada/ChartGroup/v1] openstack-rabbitmq validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-keystone validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-keystone-api-proxy validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-barbican validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-glance validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-ceph-rgw validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-cinder validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-compute-kit validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-heat validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-fm-rest-api validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-horizon validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-telemetry validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-dcdbsync validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] openstack-psp-rolebinding validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating document [armada/Manifest/v1] armada-manifest validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-28 14:40:46.553 68 INFO armada.handlers.armada [-] Performing pre-flight operations. 
2021-06-28 14:40:46.553 68 DEBUG armada.handlers.tiller [-] Using Tiller host IP: 127.0.0.1 _get_tiller_ip /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:165 2021-06-28 14:40:46.553 68 DEBUG armada.handlers.tiller [-] Getting Tiller Status: Tiller exists tiller_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:186 2021-06-28 14:40:46.553 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/psp-rolebinding-0.1.0.tgz 2021-06-28 14:40:46.553 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.557 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/ingress-0.1.0.tgz 2021-06-28 14:40:46.557 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.570 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/helm-toolkit-0.1.0.tgz 2021-06-28 14:40:46.570 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.581 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/nginx-ports-control-0.1.0.tgz 2021-06-28 14:40:46.581 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.584 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/mariadb-0.1.0.tgz 2021-06-28 14:40:46.584 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.600 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/memcached-0.1.0.tgz 2021-06-28 14:40:46.600 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.612 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/rabbitmq-0.1.0.tgz 2021-06-28 14:40:46.612 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.626 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/keystone-0.1.0.tgz 2021-06-28 14:40:46.626 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.641 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/glance-0.1.0.tgz 2021-06-28 14:40:46.641 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.658 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/cinder-0.1.0.tgz 2021-06-28 14:40:46.658 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.674 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/libvirt-0.1.0.tgz 2021-06-28 14:40:46.674 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.688 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/openvswitch-0.1.0.tgz 2021-06-28 14:40:46.688 68 WARNING armada.handlers.armada [-] Disabling server validation certs to 
extract charts 2021-06-28 14:40:46.702 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/nova-0.1.0.tgz 2021-06-28 14:40:46.703 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.727 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/nova-api-proxy-0.1.0.tgz 2021-06-28 14:40:46.727 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.739 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/neutron-0.1.0.tgz 2021-06-28 14:40:46.739 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.755 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/placement-0.1.0.tgz 2021-06-28 14:40:46.755 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.768 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/heat-0.1.0.tgz 2021-06-28 14:40:46.768 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.783 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/fm-rest-api-0.1.0.tgz 2021-06-28 14:40:46.783 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.796 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/starlingx/horizon-0.1.0.tgz 2021-06-28 14:40:46.796 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-28 14:40:46.810 68 DEBUG armada.handlers.tiller [-] Tiller ListReleases() with timeout=300, request=limit: 32 status_codes: UNKNOWN status_codes: DEPLOYED status_codes: DELETED status_codes: DELETING status_codes: FAILED status_codes: PENDING_INSTALL status_codes: PENDING_UPGRADE status_codes: PENDING_ROLLBACK get_results /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:215 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-mariadb, version 1, status: FAILED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-ingress, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-nginx-ports-control, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-psp-rolebinding, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release stx-cephfs-provisioner, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release stx-ceph-pools-audit, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 
14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release stx-rbd-provisioner, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release cm-cert-manager-psp-rolebinding, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release cm-cert-manager, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release ic-nginx-ingress, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-28 14:40:46.833 68 INFO armada.handlers.armada [-] Processing ChartGroup: openstack-psp-rolebinding (Deploy psp rolebinding), sequenced=True 2021-06-28 14:40:46.833 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-psp-rolebinding]: Processing Chart, release=osh-openstack-psp-rolebinding 2021-06-28 14:40:46.833 68 DEBUG armada.handlers.wait [-] [chart=openstack-psp-rolebinding]: Resolved `wait.resources` list: [] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89 2021-06-28 14:40:46.834 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-psp-rolebinding]: Existing release osh-openstack-psp-rolebinding found in namespace openstack 2021-06-28 14:40:46.834 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-psp-rolebinding]: Checking for updates to chart release inputs. 2021-06-28 14:40:46.835 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-psp-rolebinding]: Found no updates to chart release inputs 2021-06-28 14:40:46.835 68 INFO armada.handlers.armada [-] All Charts applied in ChartGroup openstack-psp-rolebinding. 2021-06-28 14:40:46.835 68 INFO armada.handlers.armada [-] Processing ChartGroup: openstack-ingress (OpenStack Ingress Controller), sequenced=False 2021-06-28 14:40:46.836 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-ingress]: Processing Chart, release=osh-openstack-ingress 2021-06-28 14:40:46.836 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-nginx-ports-control]: Processing Chart, release=osh-openstack-nginx-ports-control 2021-06-28 14:40:46.836 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Resolved `wait.resources` list: [{'type': 'job', 'required': False, 'labels': {'release_group': 'osh-openstack-ingress'}}, {'type': 'pod', 'labels': {'release_group': 'osh-openstack-ingress'}}] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89 2021-06-28 14:40:46.836 68 DEBUG armada.handlers.wait [-] [chart=openstack-nginx-ports-control]: Resolved `wait.resources` list: [] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89 2021-06-28 14:40:46.836 68 INFO armada.handlers.chartbuilder [-] [chart=openstack-ingress]: Building dependency chart helm-toolkit for release openstack-ingress. 2021-06-28 14:40:46.838 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-nginx-ports-control]: Existing release osh-openstack-nginx-ports-control found in namespace openstack 2021-06-28 14:40:46.839 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-nginx-ports-control]: Checking for updates to chart release inputs. 
2021-06-28 14:40:46.841 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-nginx-ports-control]: Found no updates to chart release inputs 2021-06-28 14:40:46.847 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-ingress]: Existing release osh-openstack-ingress found in namespace openstack 2021-06-28 14:40:46.850 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-ingress]: Checking for updates to chart release inputs. 2021-06-28 14:40:46.903 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-ingress]: Found no updates to chart release inputs 2021-06-28 14:40:46.903 68 INFO armada.handlers.wait [-] [chart=openstack-ingress]: Waiting for resource type=job, namespace=openstack labels=release_group=osh-openstack-ingress required=False for 1800s 2021-06-28 14:40:46.903 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Starting to wait on: namespace=openstack, resource type=job, label_selector=(release_group=osh-openstack-ingress), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367 2021-06-28 14:40:46.907 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Skipping non-required wait, no job resources found. _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:386 2021-06-28 14:40:46.908 68 INFO armada.handlers.wait [-] [chart=openstack-ingress]: Waiting for resource type=pod, namespace=openstack labels=release_group=osh-openstack-ingress required=True for 1800s 2021-06-28 14:40:46.908 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Starting to wait on: namespace=openstack, resource type=pod, label_selector=(release_group=osh-openstack-ingress), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367 2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: pod ingress-7754d468d-d7dz4 is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258 2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: pod ingress-error-pages-75dd8b57d8-hbdfb is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258 2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] [chart=openstack-ingress]: Found no modified resources. wait /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:302 2021-06-28 14:40:46.919 68 INFO armada.handlers.armada [-] All Charts applied in ChartGroup openstack-ingress. 2021-06-28 14:40:46.919 68 INFO armada.handlers.armada [-] Processing ChartGroup: openstack-mariadb (Mariadb), sequenced=True 2021-06-28 14:40:46.919 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-mariadb]: Processing Chart, release=osh-openstack-mariadb 2021-06-28 14:40:46.920 68 DEBUG armada.handlers.wait [-] [chart=openstack-mariadb]: Resolved `wait.resources` list: [{'type': 'job', 'required': False, 'labels': {'release_group': 'osh-openstack-mariadb'}}, {'type': 'pod', 'labels': {'release_group': 'osh-openstack-mariadb'}}] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89 2021-06-28 14:40:46.920 68 INFO armada.handlers.chartbuilder [-] [chart=openstack-mariadb]: Building dependency chart helm-toolkit for release openstack-mariadb. 
2021-06-28 14:40:46.928 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-mariadb]: Purging release osh-openstack-mariadb with status FAILED 2021-06-28 14:40:46.928 68 INFO armada.handlers.tiller [-] [chart=openstack-mariadb]: Delete osh-openstack-mariadb release with disable_hooks=False, purge=True, timeout=300 flags 2021-06-28 14:40:47.384 68 INFO armada.handlers.chart_deploy [-] [chart=openstack-mariadb]: Installing release osh-openstack-mariadb in namespace openstack, wait=True, timeout=1800s 2021-06-28 14:40:47.387 68 INFO armada.handlers.tiller [-] [chart=openstack-mariadb]: Helm install release: wait=True, timeout=1800 2021-06-28 14:41:46.445 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-28 14:42:46.508 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 [sysadmin at controller-0 ~(keystone_admin)]$ -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From douglas.pereira at windriver.com Mon Jun 28 18:23:04 2021 From: douglas.pereira at windriver.com (Douglas Lopes Pereira) Date: Mon, 28 Jun 2021 15:23:04 -0300 Subject: [Starlingx-discuss] fails to deploy openstack In-Reply-To: <1624900425084.144021533.2106006065@optimcloud.com> References: <1624900425084.144021533.2106006065@optimcloud.com> Message-ID: The next step is to understand why the Persistent Volume Claim didn't work as indicated in the Events section for your last command. Can you show us the result for the following command? 
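As a rough sketch of where a claim like this usually gets stuck on an AIO-SX install (assuming the default rbd-provisioner and "general" storage class that platform-integ-apps normally sets up; names may differ on your system), a few checks can be run alongside the describe below:

kubectl get pvc -n openstack
kubectl get storageclass
kubectl get pods -n kube-system | grep -i provisioner
ceph -s
system host-stor-list controller-0

A claim that stays Pending with no matching PersistentVolume generally points at the provisioner pod not running, the expected storage class missing, or the Ceph cluster not being healthy (for example no OSD configured yet on the controller), rather than at mariadb itself.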
kubectl describe pvc mysql-data-mariadb-server-0 -n openstack Regards, Doug On Mon, Jun 28, 2021 at 2:13 PM Embedded Devel wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > [sysadmin at controller-0 ~(keystone_admin)]$ kubectl describe pod > mariadb-server-0 -n openstack > Name: mariadb-server-0 > Namespace: openstack > Priority: 0 > Node: > Labels: application=mariadb > component=server > controller-revision-hash=mariadb-server-5dc96f4645 > release_group=osh-openstack-mariadb > statefulset.kubernetes.io/pod-name=mariadb-server-0 > > Annotations: configmap-bin-hash: > c7bbcd0d5c26e095ef66f9d8387555bbee4c9baaf1a3f18ae5d4eef7041cc987 > configmap-etc-hash: > 3a2301819580752ada82bdf35edfa8e018b79bf5d0d5d2a5253acef6057cfd4d > mariadb-dbadmin-password-hash: > 503e2ef2dadb3c74192749156bfb35fbe1c0ac9dc5e9e0c00eb5b473e64cfc5c > mariadb-sst-password-hash: > 503e2ef2dadb3c74192749156bfb35fbe1c0ac9dc5e9e0c00eb5b473e64cfc5c > openstackhelm.openstack.org/release_uuid: > > Status: Pending > IP: > IPs: > Controlled By: StatefulSet/mariadb-server > Init Containers: > init: > Image: registry.local:9001/ > quay.io/airshipit/kubernetes-entrypoint:v1.0.0 > Port: > Host Port: > Command: > kubernetes-entrypoint > Environment: > POD_NAME: mariadb-server-0 (v1:metadata.name) > NAMESPACE: openstack (v1:metadata.namespace) > INTERFACE_NAME: eth0 > PATH: > /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/ > DEPENDENCY_SERVICE: > DEPENDENCY_DAEMONSET: > DEPENDENCY_CONTAINER: > DEPENDENCY_POD_JSON: > DEPENDENCY_CUSTOM_RESOURCE: > Mounts: > /var/run/secrets/kubernetes.io/serviceaccount > > from osh-openstack-mariadb-mariadb-token-d26pq (ro) > mariadb-perms: > Image: registry.local:9001/ > docker.io/openstackhelm/mariadb:ubuntu_xenial-20200303 > Port: > Host Port: > Command: > chown > -R > mysql:mysql > /var/lib/mysql > Environment: > Mounts: > /tmp from pod-tmp (rw) > /var/lib/mysql from mysql-data (rw) > /var/run/secrets/kubernetes.io/serviceaccount > > from osh-openstack-mariadb-mariadb-token-d26pq (ro) > Containers: > mariadb: > Image: registry.local:9001/ > docker.io/openstackhelm/mariadb:ubuntu_xenial-20200303 > Ports: 3306/TCP, 4567/TCP > Host Ports: 0/TCP, 0/TCP > Command: > /tmp/start.py > > Readiness: exec [/tmp/readiness.sh > ] > delay=30s timeout=15s period=30s #success=1 #failure=3 > Startup: exec [/tmp/readiness.sh > ] > delay=30s timeout=1s period=30s #success=1 #failure=3 > Environment: > POD_NAMESPACE: openstack (v1:metadata.namespace) > MARIADB_REPLICAS: 1 > POD_NAME_PREFIX: mariadb-server > DISCOVERY_DOMAIN: > mariadb-discovery.openstack.svc.cluster.local > DIRECT_SVC_NAME: mariadb-server > WSREP_PORT: 4567 > STATE_CONFIGMAP: osh-openstack-mariadb-mariadb-state > MYSQL_DBADMIN_USERNAME: root > MYSQL_DBADMIN_PASSWORD: secret 'mariadb-dbadmin-password'> Optional: false > MYSQL_DBSST_USERNAME: sst > MYSQL_DBSST_PASSWORD: secret 'mariadb-dbsst-password'> Optional: false > MYSQL_DBAUDIT_USERNAME: audit > MYSQL_DBAUDIT_PASSWORD: secret 'mariadb-dbaudit-password'> Optional: false > Mounts: > /etc/mysql/admin_user.cnf from mariadb-secrets > (ro,path="admin_user.cnf") > /etc/mysql/conf.d from mycnfd (rw) > /etc/mysql/conf.d/00-base.cnf from mariadb-etc > (ro,path="00-base.cnf") > /etc/mysql/conf.d/99-force.cnf from mariadb-etc > (ro,path="99-force.cnf") > /etc/mysql/my.cnf from mariadb-etc (ro,path="my.cnf") > /tmp from pod-tmp (rw) > /tmp/readiness.sh > > from mariadb-bin (ro,path="readiness.sh > > ") > /tmp/start.py > > from mariadb-bin (ro,path="start.py > > ") > 
/tmp/stop.sh > > from mariadb-bin (ro,path="stop.sh > > ") > /var/lib/mysql from mysql-data (rw) > /var/run/mysqld from var-run (rw) > /var/run/secrets/kubernetes.io/serviceaccount > > from osh-openstack-mariadb-mariadb-token-d26pq (ro) > Conditions: > Type Status > PodScheduled False > Volumes: > mysql-data: > Type: PersistentVolumeClaim (a reference to a > PersistentVolumeClaim in the same namespace) > ClaimName: mysql-data-mariadb-server-0 > ReadOnly: false > pod-tmp: > Type: EmptyDir (a temporary directory that shares a pod's > lifetime) > Medium: > SizeLimit: > mycnfd: > Type: EmptyDir (a temporary directory that shares a pod's > lifetime) > Medium: > SizeLimit: > var-run: > Type: EmptyDir (a temporary directory that shares a pod's > lifetime) > Medium: > SizeLimit: > mariadb-bin: > Type: ConfigMap (a volume populated by a ConfigMap) > Name: mariadb-bin > Optional: false > mariadb-etc: > Type: ConfigMap (a volume populated by a ConfigMap) > Name: mariadb-etc > Optional: false > mariadb-secrets: > Type: Secret (a volume populated by a Secret) > SecretName: mariadb-secrets > Optional: false > osh-openstack-mariadb-mariadb-token-d26pq: > Type: Secret (a volume populated by a Secret) > SecretName: osh-openstack-mariadb-mariadb-token-d26pq > Optional: false > QoS Class: BestEffort > Node-Selectors: openstack-control-plane=enabled > Tolerations: :NoExecute > Events: > Type Reason Age From > Message > ---- ------ ---- ---- > ------- > Warning FailedScheduling 2m35s (x81 over 98m) default-scheduler > running "VolumeBinding" filter plugin for pod "mariadb-server-0": pod has > unbound immediate PersistentVolumeClaims > > > On Tuesday 29 June 2021 00:09:48 AM (+07:00), Douglas Lopes Pereira wrote: > > Hi, > > we would need to check why the mariadb-server-0 pod is in a pending state. > What is the output for the following command? > > kubectl describe pod mariadb-server-0 -n openstack > > Regards, > Doug > > On Mon, Jun 28, 2021 at 1:12 PM Embedded Devel > wrote: > >> [Please note: This e-mail is from an EXTERNAL e-mail address] >> >> stx simplex 5.0 bare metal fails to deploy openstack >> >> name | stx-openstack | >> progress | operation aborted, check logs for detail | >> >> kubectl get pods -n openstack >> >> openstack ingress-7754d468d-t9wvh 1/1 >> Running 0 35m >> openstack ingress-error-pages-75dd8b57d8-d6rhk 1/1 >> Running 0 35m >> openstack mariadb-ingress-6b9f6964f5-7l77l 0/1 >> Running 0 35m >> openstack mariadb-ingress-error-pages-86c79d7dd4-6nzpw 1/1 >> Running 0 35m >> openstack mariadb-server-0 0/1 >> Pending 0 35m >> >> >> logs say >> cat /var/log/armada/stx-openstack-apply_2021-06-28-14-40-43.log >> 2021-06-28 14:40:46.085 68 DEBUG armada.handlers.document [-] Resolving >> reference >> >> /tmp/manifests/stx-openstack/1.0-83-centos-stable-versioned/stx-openstack-stx-openstack.yaml. 
>> resolve_reference >> /usr/local/lib/python3.6/dist-packages/armada/handlers/document.py >> >> :49 >> 2021-06-28 14:40:46.369 68 DEBUG armada.handlers.tiller [-] Using Tiller >> host IP: 127.0.0.1 >> >> _get_tiller_ip >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :165 >> 2021-06-28 14:40:46.370 68 DEBUG armada.handlers.tiller [-] Using Tiller >> host port: 24134 _get_tiller_port >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :174 >> 2021-06-28 14:40:46.370 68 DEBUG armada.handlers.tiller [-] Tiller getting >> gRPC insecure channel at 127.0.0.1:24134 >> >> with options: >> [grpc.max_send_message_length=429496729, >> grpc.max_receive_message_length=429496729] get_channel >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :127 >> 2021-06-28 14:40:46.375 68 DEBUG armada.handlers.tiller [-] Armada is >> using >> Tiller at: 127.0.0.1:24134 >> , >> namespace=kube-system, timeout=300 __init__ >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :107 >> 2021-06-28 14:40:46.375 68 INFO armada.handlers.lock [-] Acquiring lock >> 2021-06-28 14:40:46.502 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-psp-rolebinding >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.502 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] helm-toolkit validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.503 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-ingress validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.503 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-nginx-ports-control >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.504 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-mariadb validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.504 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-garbd validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.505 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-memcached validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.505 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-rabbitmq validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.506 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-keystone validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.506 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-keystone-api-proxy >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.507 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-barbican 
validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.507 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-ceph-rgw validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.508 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-glance validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.508 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-cinder validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.509 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-libvirt validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.509 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-openvswitch validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.510 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-nova validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.510 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-placement validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.511 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-nova-api-proxy >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.511 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-neutron validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.512 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-ironic validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.512 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-heat validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.513 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-aodh validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.513 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-gnocchi validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.514 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-panko validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.514 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-ceilometer validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.515 68 DEBUG armada.utils.validate [-] Validating >> document 
[armada/Chart/v1] openstack-fm-rest-api validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.515 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-horizon validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.516 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-dcdbsync validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.516 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-ingress validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-mariadb validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-memcached >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-rabbitmq >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.517 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-keystone >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-keystone-api-proxy >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-barbican >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-glance validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.518 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-ceph-rgw >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-cinder validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-compute-kit >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.519 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-heat validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-fm-rest-api >> validate_armada_document >> 
/usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-horizon validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-telemetry >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.520 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-dcdbsync >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.521 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-psp-rolebinding >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.521 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Manifest/v1] armada-manifest validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-psp-rolebinding >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] helm-toolkit validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.526 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-ingress validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.527 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-nginx-ports-control >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.527 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-mariadb validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.528 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-garbd validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.528 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-memcached validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.529 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-rabbitmq validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.529 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-keystone validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.530 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-keystone-api-proxy >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.530 68 DEBUG 
armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-barbican validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.531 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-ceph-rgw validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.531 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-glance validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.532 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-cinder validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.532 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-libvirt validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.533 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-openvswitch validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.533 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-nova validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.534 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-placement validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.534 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-nova-api-proxy >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.535 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-neutron validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.535 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-ironic validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.536 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-heat validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.536 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-aodh validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.537 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-gnocchi validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.537 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-panko validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.538 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-ceilometer validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 
>> 2021-06-28 14:40:46.538 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-fm-rest-api validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.539 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-horizon validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.539 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Chart/v1] openstack-dcdbsync validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-ingress validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-mariadb validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-memcached >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.540 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-rabbitmq >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-keystone >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-keystone-api-proxy >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.541 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-barbican >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-glance validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-ceph-rgw >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-cinder validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.542 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-compute-kit >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-heat validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating >> document 
[armada/ChartGroup/v1] openstack-fm-rest-api >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-horizon validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.543 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-telemetry >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-dcdbsync >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating >> document [armada/ChartGroup/v1] openstack-psp-rolebinding >> validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.544 68 DEBUG armada.utils.validate [-] Validating >> document [armada/Manifest/v1] armada-manifest validate_armada_document >> /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py >> >> :108 >> 2021-06-28 14:40:46.553 68 INFO armada.handlers.armada [-] Performing >> pre-flight operations. >> 2021-06-28 14:40:46.553 68 DEBUG armada.handlers.tiller [-] Using Tiller >> host IP: 127.0.0.1 >> >> _get_tiller_ip >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :165 >> 2021-06-28 14:40:46.553 68 DEBUG armada.handlers.tiller [-] Getting Tiller >> Status: Tiller exists tiller_status >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :186 >> 2021-06-28 14:40:46.553 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/psp-rolebinding-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.553 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.557 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/ingress-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.557 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.570 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/helm-toolkit-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.570 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.581 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> >> http://192.168.206.1:8080/helm_charts/starlingx/nginx-ports-control-0.1.0.tgz >> >> >> 2021-06-28 14:40:46.581 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.584 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/mariadb-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.584 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.600 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/memcached-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.600 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 
2021-06-28 14:40:46.612 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/rabbitmq-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.612 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.626 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/keystone-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.626 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.641 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/glance-0.1.0.tgz >> >> 2021-06-28 14:40:46.641 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.658 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/cinder-0.1.0.tgz >> >> 2021-06-28 14:40:46.658 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.674 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/libvirt-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.674 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.688 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/openvswitch-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.688 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.702 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/nova-0.1.0.tgz >> >> 2021-06-28 14:40:46.703 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.727 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/nova-api-proxy-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.727 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.739 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/neutron-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.739 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.755 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/placement-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.755 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.768 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/heat-0.1.0.tgz >> >> 2021-06-28 14:40:46.768 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.783 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> http://192.168.206.1:8080/helm_charts/starlingx/fm-rest-api-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.783 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.796 68 INFO armada.handlers.armada [-] Downloading >> tarball from: >> 
http://192.168.206.1:8080/helm_charts/starlingx/horizon-0.1.0.tgz >> 2021-06-28 >> >> 14:40:46.796 68 WARNING armada.handlers.armada [-] Disabling >> server validation certs to extract charts >> 2021-06-28 14:40:46.810 68 DEBUG armada.handlers.tiller [-] Tiller >> ListReleases() with timeout=300, request=limit: 32 >> status_codes: UNKNOWN >> status_codes: DEPLOYED >> status_codes: DELETED >> status_codes: DELETING >> status_codes: FAILED >> status_codes: PENDING_INSTALL >> status_codes: PENDING_UPGRADE >> status_codes: PENDING_ROLLBACK >> get_results >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :215 >> 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release >> osh-openstack-mariadb, version 1, status: FAILED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release >> osh-openstack-ingress, version 1, status: DEPLOYED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release >> osh-openstack-nginx-ports-control, version 1, status: DEPLOYED >> list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release >> osh-openstack-psp-rolebinding, version 1, status: DEPLOYED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release >> stx-cephfs-provisioner, version 1, status: DEPLOYED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release >> stx-ceph-pools-audit, version 1, status: DEPLOYED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.832 68 DEBUG armada.handlers.tiller [-] Found release >> stx-rbd-provisioner, version 1, status: DEPLOYED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release >> cm-cert-manager-psp-rolebinding, version 1, status: DEPLOYED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release >> cm-cert-manager, version 1, status: DEPLOYED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.833 68 DEBUG armada.handlers.tiller [-] Found release >> ic-nginx-ingress, version 1, status: DEPLOYED list_releases >> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py >> >> :276 >> 2021-06-28 14:40:46.833 68 INFO armada.handlers.armada [-] Processing >> ChartGroup: openstack-psp-rolebinding (Deploy psp rolebinding), >> sequenced=True >> 2021-06-28 14:40:46.833 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-psp-rolebinding]: Processing Chart, >> release=osh-openstack-psp-rolebinding >> 2021-06-28 14:40:46.833 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-psp-rolebinding]: Resolved `wait.resources` list: [] >> __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :89 >> 2021-06-28 14:40:46.834 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-psp-rolebinding]: Existing 
release >> osh-openstack-psp-rolebinding found in namespace openstack >> 2021-06-28 14:40:46.834 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-psp-rolebinding]: Checking for updates to chart release >> inputs. >> 2021-06-28 14:40:46.835 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-psp-rolebinding]: Found no updates to chart release >> inputs >> 2021-06-28 14:40:46.835 68 INFO armada.handlers.armada [-] All Charts >> applied in ChartGroup openstack-psp-rolebinding. >> 2021-06-28 14:40:46.835 68 INFO armada.handlers.armada [-] Processing >> ChartGroup: openstack-ingress (OpenStack Ingress Controller), >> sequenced=False >> 2021-06-28 14:40:46.836 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-ingress]: Processing Chart, release=osh-openstack-ingress >> 2021-06-28 14:40:46.836 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-nginx-ports-control]: Processing Chart, >> release=osh-openstack-nginx-ports-control >> 2021-06-28 14:40:46.836 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-ingress]: Resolved `wait.resources` list: [{'type': >> 'job', >> 'required': False, 'labels': {'release_group': 'osh-openstack-ingress'}}, >> {'type': 'pod', 'labels': {'release_group': 'osh-openstack-ingress'}}] >> __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :89 >> 2021-06-28 14:40:46.836 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-nginx-ports-control]: Resolved `wait.resources` list: [] >> __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :89 >> 2021-06-28 14:40:46.836 68 INFO armada.handlers.chartbuilder [-] >> [chart=openstack-ingress]: Building dependency chart helm-toolkit for >> release openstack-ingress. >> 2021-06-28 14:40:46.838 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-nginx-ports-control]: Existing release >> osh-openstack-nginx-ports-control found in namespace openstack >> 2021-06-28 14:40:46.839 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-nginx-ports-control]: Checking for updates to chart >> release inputs. >> 2021-06-28 14:40:46.841 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-nginx-ports-control]: Found no updates to chart release >> inputs >> 2021-06-28 14:40:46.847 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-ingress]: Existing release osh-openstack-ingress found in >> namespace openstack >> 2021-06-28 14:40:46.850 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-ingress]: Checking for updates to chart release inputs. >> 2021-06-28 14:40:46.903 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-ingress]: Found no updates to chart release inputs >> 2021-06-28 14:40:46.903 68 INFO armada.handlers.wait [-] >> [chart=openstack-ingress]: Waiting for resource type=job, >> namespace=openstack labels=release_group=osh-openstack-ingress >> required=False for 1800s >> 2021-06-28 14:40:46.903 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-ingress]: Starting to wait on: namespace=openstack, >> resource type=job, label_selector=(release_group=osh-openstack-ingress), >> timeout=1800 _watch_resource_completions >> /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :367 >> 2021-06-28 14:40:46.907 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-ingress]: Skipping non-required wait, no job resources >> found. 
_watch_resource_completions >> /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :386 >> 2021-06-28 14:40:46.908 68 INFO armada.handlers.wait [-] >> [chart=openstack-ingress]: Waiting for resource type=pod, >> namespace=openstack labels=release_group=osh-openstack-ingress >> required=True for 1800s >> 2021-06-28 14:40:46.908 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-ingress]: Starting to wait on: namespace=openstack, >> resource type=pod, label_selector=(release_group=osh-openstack-ingress), >> timeout=1800 _watch_resource_completions >> /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :367 >> 2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-ingress]: pod ingress-7754d468d-d7dz4 is ready! >> handle_resource >> /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :258 >> 2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-ingress]: pod ingress-error-pages-75dd8b57d8-hbdfb is >> ready! handle_resource >> /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :258 >> 2021-06-28 14:40:46.919 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-ingress]: Found no modified resources. wait >> /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :302 >> 2021-06-28 14:40:46.919 68 INFO armada.handlers.armada [-] All Charts >> applied in ChartGroup openstack-ingress. >> 2021-06-28 14:40:46.919 68 INFO armada.handlers.armada [-] Processing >> ChartGroup: openstack-mariadb (Mariadb), sequenced=True >> 2021-06-28 14:40:46.919 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-mariadb]: Processing Chart, release=osh-openstack-mariadb >> 2021-06-28 14:40:46.920 68 DEBUG armada.handlers.wait [-] >> [chart=openstack-mariadb]: Resolved `wait.resources` list: [{'type': >> 'job', >> 'required': False, 'labels': {'release_group': 'osh-openstack-mariadb'}}, >> {'type': 'pod', 'labels': {'release_group': 'osh-openstack-mariadb'}}] >> __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py >> >> :89 >> 2021-06-28 14:40:46.920 68 INFO armada.handlers.chartbuilder [-] >> [chart=openstack-mariadb]: Building dependency chart helm-toolkit for >> release openstack-mariadb. >> 2021-06-28 14:40:46.928 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-mariadb]: Purging release osh-openstack-mariadb with >> status FAILED >> 2021-06-28 14:40:46.928 68 INFO armada.handlers.tiller [-] >> [chart=openstack-mariadb]: Delete osh-openstack-mariadb release with >> disable_hooks=False, purge=True, timeout=300 flags >> 2021-06-28 14:40:47.384 68 INFO armada.handlers.chart_deploy [-] >> [chart=openstack-mariadb]: Installing release osh-openstack-mariadb in >> namespace openstack, wait=True, timeout=1800s >> 2021-06-28 14:40:47.387 68 INFO armada.handlers.tiller [-] >> [chart=openstack-mariadb]: Helm install release: wait=True, timeout=1800 >> 2021-06-28 14:41:46.445 68 DEBUG armada.handlers.lock [-] Updating lock >> update_lock >> /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py >> >> :176 >> 2021-06-28 14:42:46.508 68 DEBUG armada.handlers.lock [-] Updating lock >> update_lock >> /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py >> >> :176 >> [sysadmin at controller-0 ~(keystone_admin)]$ >> >> >> >> >> >> -- >> Sent with Vivaldi Mail. 
Download Vivaldi for free at vivaldi.com >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> > > -- > Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian_peal at yahoo.com Mon Jun 28 23:55:46 2021 From: brian_peal at yahoo.com (Scott Peal) Date: Mon, 28 Jun 2021 23:55:46 +0000 (UTC) Subject: [Starlingx-discuss] Installation services? References: <1283957878.1526575.1624924546491.ref@mail.yahoo.com> Message-ID: <1283957878.1526575.1624924546491@mail.yahoo.com> Hello everyone, Does anyone know someone who can install StarlingX with OpenStack/K8m remotely at a fair price? I have tried multiple times but not getting anywhere. Stuck on figuring out how to install the NIC drivers and set up the VLANs correctly. Servers:- 2 controllers- 2 Rook/Ceph storage- 8 hosts/workers- Dell PowerEdge C6100 X5650 dual Xeon hex-cores  Networking:- Dual 10GbE NICs in each server- Dual 1GbE NICs in each server- Dual Dell S6010 switches running FTOS 9.x I am a one man band so not a lot of funds, but getting someone to show me the ropes would be worth it.  Regards, Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at optimcloud.com Tue Jun 29 02:08:49 2021 From: lists at optimcloud.com (Embedded Devel) Date: Tue, 29 Jun 2021 02:08:49 +0000 Subject: [Starlingx-discuss] Installation services? In-Reply-To: <1283957878.1526575.1624924546491@mail.yahoo.com> References: <1283957878.1526575.1624924546491@mail.yahoo.com> Message-ID: <1624932426966.2251182275.3537273669@optimcloud.com> On Tuesday 29 June 2021 06:55:46 AM (+07:00), Scott Peal wrote: Hello everyone, Does anyone know someone who can install StarlingX with OpenStack/K8m remotely at a fair price? I have tried multiple times but not getting anywhere. Stuck on figuring out how to install the NIC drivers and set up the VLANs correctly. its really pretty straight forward to install, not sure what you mean by "install the NIC drivers", unless they are "unsupported" cards, then its alot of work for building a custom image. so what cards are they that require "installing NIC drivers" ? or are you guessing really ? Servers: - 2 controllers - 2 Rook/Ceph storage - 8 hosts/workers - Dell PowerEdge C6100 X5650 dual Xeon hex-cores Networking: - Dual 10GbE NICs in each server - Dual 1GbE NICs in each server - Dual Dell S6010 switches running FTOS 9.x I am a one man band so not a lot of funds, but getting someone to show me the ropes would be worth it. Regards, Scott -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian_peal at yahoo.com Tue Jun 29 04:02:17 2021 From: brian_peal at yahoo.com (Scott Peal) Date: Tue, 29 Jun 2021 04:02:17 +0000 (UTC) Subject: [Starlingx-discuss] Installation services? In-Reply-To: <1624932426966.2251182275.3537273669@optimcloud.com> References: <1283957878.1526575.1624924546491@mail.yahoo.com> <1624932426966.2251182275.3537273669@optimcloud.com> Message-ID: <670986439.3265291.1624939337384@mail.yahoo.com> The Intel driver instructions needs rpmbuild (per link in first post). I am not able to get "sudo yum install rpm-build" to work. It is going to an "http://container:8080/......" url. However, dns is not resolving for container. 
So I ran the container creation script using the IP of the 1G NIC. Running the install for rpm-build now says I don't have permissions. Note, ping to google.com resolves and works fine. First question is, do we configure the physical NICs before we run container scripts or after? If after, then how do I get permissions to install rpm-build? If before, how do I get the repo to point to Centos (I tried this too with no luck). Note, I am running the simplex all-in-one bare metal install. I need to test the NIC in one server before I buy all the other cards and cables. Thanks for the advice...Scott On Mon, Jun 28, 2021 at 10:08 PM, Embedded Devel wrote: On Tuesday 29 June 2021 06:55:46 AM (+07:00), Scott Peal wrote: Hello everyone, Does anyone know someone who can install StarlingX with OpenStack/K8m remotely at a fair price? I have tried multiple times but not getting anywhere. Stuck on figuring out how to install the NIC drivers and set up the VLANs correctly. its really pretty straight forward to install, not sure what you mean by "install the NIC drivers", unless they are "unsupported" cards, then its alot of work for building a custom image.so what cards are they that require "installing NIC drivers" ? or are you guessing really ? Servers:- 2 controllers- 2 Rook/Ceph storage- 8 hosts/workers- Dell PowerEdge C6100 X5650 dual Xeon hex-cores  Networking:- Dual 10GbE NICs in each server- Dual 1GbE NICs in each server- Dual Dell S6010 switches running FTOS 9.x I am a one man band so not a lot of funds, but getting someone to show me the ropes would be worth it.  Regards, Scott --  Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at optimcloud.com Tue Jun 29 04:38:31 2021 From: lists at optimcloud.com (Embedded Devel) Date: Tue, 29 Jun 2021 04:38:31 +0000 Subject: [Starlingx-discuss] Installation services? In-Reply-To: <670986439.3265291.1624939337384@mail.yahoo.com> References: <670986439.3265291.1624939337384@mail.yahoo.com> Message-ID: <1624941395542.10351172.1945250469@optimcloud.com> On Tuesday 29 June 2021 11:02:17 AM (+07:00), Scott Peal wrote: The Intel driver instructions needs rpmbuild (per link in first post). I am not able to get "sudo yum install rpm-build" to work. It is going to an "http://container:8080/......" url. However, dns is not resolving for container. So I ran the container creation script using the IP of the 1G NIC. Running the install for rpm-build now says I don't have permissions. Note, ping to google.com resolves and works fine. First question is, do we configure the physical NICs before we run container scripts or after? If after, then how do I get permissions to install rpm-build? If before, how do I get the repo to point to Centos (I tried this too with no luck). Note, I am running the simplex all-in-one bare metal install. I need to test the NIC in one server before I buy all the other cards and cables. Thanks for the advice...Scott Okay even more confused, did you try to even install stx on bare metal with these cards? I also have Intel based cards and they work fine with the default bootimage.iso ... so What card do you have ? [ 1.391014] e1000e: Intel(R) PRO/1000 Network Driver - 3.6.0-NAPI [ 1.391014] e1000e: Copyright(c) 1999 - 2019 Intel Corporation. [ 1.523964] Intel(R) 10GbE PCI Express Linux Network Driver - version 5.6.5 [ 1.526589] Copyright(c) 1999 - 2019 Intel Corporation. 
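If you want to sanity-check what the installed kernel sees, a couple of standard commands are usually enough -- nothing StarlingX-specific, and the driver names below are just the common Intel ones, so adjust for your card:

    lspci -nn | grep -i ethernet          # list the NICs the PCI bus reports
    dmesg | grep -iE 'ixgbe|igb|e1000e'   # see which in-tree driver bound to them
    ip -br link                           # brief per-interface link state

On my box the boot log already shows the bundled Intel drivers loading: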
[ 1.538384] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.6.0-k [ 1.538384] igb: Copyright (c) 2007-2014 Intel Corporation. [ 1.587381] igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection [ 1.701425] e1000e 0000:00:1f.6 eth1000: Intel(R) PRO/1000 Network Connection [ 1.712179] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.14.13 [ 1.714280] i40e: Copyright(c) 2013 - 2020 Intel Corporation. [ 1.718167] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver - version 4.0.1 [ 1.720336] Copyright (c) 2013, Intel Corporation. [ 1.729075] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.2.1 [ 1.731263] ice: Copyright (C) 2018-2019, Intel Corporation. [ 1.785127] ixgbe 0000:02:00.0 eth1000: Intel(R) 10 Gigabit Network Connection [ 2.066041] ixgbe 0000:02:00.1 eth1001: Intel(R) 10 Gigabit Network Connection [ 2.071970] ixgbevf: Intel(R) 10GbE PCI Express Virtual Function Driver - version 4.6.3 On Mon, Jun 28, 2021 at 10:08 PM, Embedded Devel wrote: On Tuesday 29 June 2021 06:55:46 AM (+07:00), Scott Peal wrote: Hello everyone, Does anyone know someone who can install StarlingX with OpenStack/K8m remotely at a fair price? I have tried multiple times but not getting anywhere. Stuck on figuring out how to install the NIC drivers and set up the VLANs correctly. its really pretty straight forward to install, not sure what you mean by "install the NIC drivers", unless they are "unsupported" cards, then its alot of work for building a custom image. so what cards are they that require "installing NIC drivers" ? or are you guessing really ? Servers: - 2 controllers - 2 Rook/Ceph storage - 8 hosts/workers - Dell PowerEdge C6100 X5650 dual Xeon hex-cores Networking: - Dual 10GbE NICs in each server - Dual 1GbE NICs in each server - Dual Dell S6010 switches running FTOS 9.x I am a one man band so not a lot of funds, but getting someone to show me the ropes would be worth it. Regards, Scott -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Tue Jun 29 10:04:00 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 29 Jun 2021 10:04:00 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210629T005342Z Message-ID: Sanity Test from 2021-June-29 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210629T005342Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210629T005342Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 90 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Greg.Waines at windriver.com Tue Jun 29 11:25:58 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Tue, 29 Jun 2021 11:25:58 +0000 Subject: [Starlingx-discuss] stx-openstack application applying failed In-Reply-To: <1624882444992.3028693704.3554689924@optimcloud.com> References: <1624882444992.3028693704.3554689924@optimcloud.com> Message-ID: when I cut & paste your output into grep rootfs | awk ‘{print $4}’ I get /dev/disk/by-path/pci-0000:00:17.0-ata-1.0 the command is basically setting ROOT_DISK to rootfs_device Greg. From: Embedded Devel Sent: Monday, June 28, 2021 8:15 AM To: Waines, Greg ; open infra ; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io; Camp, MaryX Subject: Re: RE: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] On Monday 28 June 2021 19:06:17 PM (+07:00), Waines, Greg wrote: Here is what you need to do … we will update this in docs. Greg. # Increase size of cgts-vg LVG in order to increase size of docker fs export NODE=controller-1 ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}') ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') NEW_SIZE=35 NEW_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NEW_SIZE}) NEW_PARTITION_UUID=$(echo ${NEW_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') system host-pv-add ${NODE} cgts-vg ${NEW_PARTITION_UUID} system host-fs-modify controller-1 docker=60 ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}') fails on my stx 5.0 simplex [sysadmin at controller-0 ~(keystone_admin)]$ system host-show ${NODE} +-----------------------+----------------------------------------------------------------------+ | Property | Value | +-----------------------+----------------------------------------------------------------------+ | action | none | | administrative | unlocked | | availability | available | | bm_ip | None | | bm_type | none | | bm_username | None | | boot_device | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0 | | capabilities | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} | | clock_synchronization | ntp | | config_applied | 65c1c5ac-546a-45fc-8d82-e9644f1930a2 | | config_status | None | | config_target | 65c1c5ac-546a-45fc-8d82-e9644f1930a2 | | console | tty0 | | created_at | 2021-06-26T10:51:25.595104+00:00 | | device_image_update | None | | hostname | controller-0 | | id | 1 | | install_output | text | | install_state | None | | install_state_info | None | | inv_state | inventoried | | invprovision | provisioned | | location | {} | | mgmt_ip | 192.168.204.2 | | mgmt_mac | 00:00:00:00:00:00 | | operational | enabled | | personality | controller | | reboot_needed | False | | reserved | False | | rootfs_device | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0 | | serialid | None | | software_load | 21.05 | | subfunction_avail | available | | subfunction_oper | enabled | | subfunctions | controller,worker | | task | | | tboot | false | | ttys_dcd | None | | updated_at | 2021-06-28T12:13:20.119581+00:00 | | uptime | 15240 | | uuid | 2b237d4f-fc3d-4f83-bdf2-b2689469b89e | | vim_progress_status | services-enabled | +-----------------------+----------------------------------------------------------------------+ From: Embedded Devel > Sent: Sunday, June 27, 2021 7:18 AM To: Waines, Greg >; open infra >; Zvonar, Bill > 
Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] On Saturday 26 June 2021 20:17:36 PM (+07:00), Waines, Greg wrote: Agreed that increased docker fs is not documented well … especially if you need to increase the cgts_vg logical volume group in order to increase the docker filesystem size. We have plans to fix this. Yupp seems this is exactly what i need also right now, as im running into the system host-fs-modify controller-0 docker=60 HostFs update failed: Not enough free space on cgts-vg. Current free space 16 GiB, requested total increase 30 GiB Greg. From: open infra > Sent: Friday, June 25, 2021 10:30 AM To: Zvonar, Bill > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Finally managed to deploy OpenStack but not sure what caused the issue. Increased disk capacity for docker in worker and reviewed network. On Thu, Jun 24, 2021 at 1:59 PM open infra > wrote: I managed to deploy stx-monitoring that require labelling only in controller nodes. Definitely something wrong with worker-0 labelling On Wed, Jun 23, 2021 at 8:52 PM open infra > wrote: Here is more information about the issue. http://paste.openstack.org/show/806872/ Then I set the openstack-compute-node label to the controller-0 and re-apply stx-openstack (just to test). Then stx-openstack applying progress continued up to 55%. I can lock/unlcok the worker-0 via controller nodes. So, it should not be a problem with management network. On Mon, Jun 21, 2021 at 4:40 PM open infra > wrote: thank you Bill and Thiago. Now I have switched to Release 5. Don't we need to set following labels for release 5 deployment if we supposed to deploy stx-openstack? for controllers: system host-label-assign $NODE openstack-control-plane=enabled For worker nodes: system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled Because these labels are not visible in the release 5 installation guide. On Wed, May 26, 2021 at 9:11 PM Zvonar, Bill > wrote: Hi again Danishka, we discussed this too. It was suggested that you check /var/logs/armada to see if there are any Armada startup logs that’d help understand what’s going on. Thanks, Bill... From: open infra > Sent: Saturday, May 22, 2021 2:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-openstack application applying failed [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, I have deployed StarlingX R4 (bare metal dedicated storage installation). stx-openstack application applying was failed. When I retrieve openstack pods, I can see the status osh-openstack-garbd-garbd-7d4957d9f4-kz95v is pending. I have re-uploaded stx-openstack but the same results. I highly appreciate it if someone can help to fix resolve this matter as we have a demo next week. More details available here. describe pod osh-openstack-garbd-garbd http://paste.openstack.org/show/805587/ describe nodes http://paste.openstack.org/show/805589/ Regards, Danishka -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From openinfradn at gmail.com  Tue Jun 29 12:01:19 2021
From: openinfradn at gmail.com (open infra)
Date: Tue, 29 Jun 2021 17:31:19 +0530
Subject: [Starlingx-discuss] [docs] Number of worker nodes
Message-ID: 

Hi,

What's the maximum limit for worker nodes? Is it 100 or 200? What are the key resources that prevent us from using 200+ worker nodes with low latency?

Btw, I found two links that talk about Standard with Controller Storage deployments with different limits/values for worker nodes.

"The Standard with Controller Storage deployment option provides two high availability (HA) controller nodes and a pool of up to *10 worker nodes*."
https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/controller_storage.html

"This deployment configuration consists of a two node HA controller+storage cluster managing up to *200 worker nodes*. The limit on the size of the worker node pool is due to the performance and latency characteristics of the small integrated Ceph cluster on the controller+storage nodes."
https://docs.starlingx.io/deploy/deployment-and-configuration-options-standard-configuration-with-controller-storage.html

Regards,
Danishka
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From brian_peal at yahoo.com  Tue Jun 29 13:01:06 2021
From: brian_peal at yahoo.com (Scott Peal)
Date: Tue, 29 Jun 2021 13:01:06 +0000 (UTC)
Subject: [Starlingx-discuss] Installation services?
In-Reply-To: <1624941395542.10351172.1945250469@optimcloud.com>
References: <670986439.3265291.1624939337384@mail.yahoo.com>
 <1624941395542.10351172.1945250469@optimcloud.com>
Message-ID: <313861098.5900431.1624971666366@mail.yahoo.com>

The card is a Supermicro AOC-STGN-I2S with an Intel 82599 chip. According to the StarlingX docs, this Intel chip is supported. After installing the initial USB ISO, the interfaces are showing "no carrier". I have tested this card in a Windows 10 PC with these cables on these switch ports, and they work and obtain DHCP addresses.

Here are the interface statuses after running the controller script:

1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp4s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:59:a2:38 brd ff:ff:ff:ff:ff:ff
3: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:26:6c:f0:4b:70 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.26/24 brd 192.168.0.255 scope global dynamic eno1
       valid_lft 65469sec preferred_lft 65469sec
    inet6 fe80::226:6cff:fef0:4b70/64 scope link
       valid_lft forever preferred_lft forever
4: enp4s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 0c:c4:7a:59:a2:39 brd ff:ff:ff:ff:ff:ff
5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 00:26:6c:f0:4b:71 brd ff:ff:ff:ff:ff:ff
6: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:52:e3:3f:c9 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

According to the NIC card manual, to install the driver on a Linux system do the following:

    Build a Binary RPM Package
        1. Run ‘rpmbuild -tb ’
        2. Replace with the specific filename of the driver

Thanks again for the advice.
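P.S. For my own notes, the generic flow those Intel driver READMEs describe seems to be roughly the sketch below. The package and file names are placeholders I have not verified for this exact card, and the kernel headers have to match the running (StarlingX) kernel for the module build to succeed:

    sudo yum install rpm-build kernel-devel        # build tooling plus kernel headers
    rpmbuild -tb ixgbe-x.y.z.tar.gz                # "ixgbe-x.y.z.tar.gz" = placeholder for the vendor tarball
    sudo rpm -ivh ~/rpmbuild/RPMS/x86_64/ixgbe-*.rpm
    sudo modprobe -r ixgbe && sudo modprobe ixgbe  # reload the driver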
On Tuesday, June 29, 2021, 12:38:42 AM EDT, Embedded Devel wrote: On Tuesday 29 June 2021 11:02:17 AM (+07:00), Scott Peal wrote: The Intel driver instructions needs rpmbuild (per link in first post). I am not able to get "sudo yum install rpm-build" to work. It is going to an "http://container:8080/......" url. However, dns is not resolving for container. So I ran the container creation script using the IP of the 1G NIC. Running the install for rpm-build now says I don't have permissions. Note, ping to google.com resolves and works fine. First question is, do we configure the physical NICs before we run container scripts or after? If after, then how do I get permissions to install rpm-build? If before, how do I get the repo to point to Centos (I tried this too with no luck). Note, I am running the simplex all-in-one bare metal install. I need to test the NIC in one server before I buy all the other cards and cables. Thanks for the advice...Scott Okay even more confused, did you try to even install stx on bare metal with these cards? I also have Intel based cards and they work fine with the default bootimage.iso ... so What card do you have ? [    1.391014] e1000e: Intel(R) PRO/1000 Network Driver - 3.6.0-NAPI[    1.391014] e1000e: Copyright(c) 1999 - 2019 Intel Corporation.[    1.523964] Intel(R) 10GbE PCI Express Linux Network Driver - version 5.6.5[    1.526589] Copyright(c) 1999 - 2019 Intel Corporation.[    1.538384] igb: Intel(R) Gigabit Ethernet Network Driver - version 5.6.0-k[    1.538384] igb: Copyright (c) 2007-2014 Intel Corporation.[    1.587381] igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection[    1.701425] e1000e 0000:00:1f.6 eth1000: Intel(R) PRO/1000 Network Connection[    1.712179] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.14.13[    1.714280] i40e: Copyright(c) 2013 - 2020 Intel Corporation.[    1.718167] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver - version 4.0.1[    1.720336] Copyright (c) 2013, Intel Corporation.[    1.729075] ice: Intel(R) Ethernet Connection E800 Series Linux Driver - version 1.2.1[    1.731263] ice: Copyright (C) 2018-2019, Intel Corporation.[    1.785127] ixgbe 0000:02:00.0 eth1000: Intel(R) 10 Gigabit Network Connection[    2.066041] ixgbe 0000:02:00.1 eth1001: Intel(R) 10 Gigabit Network Connection[    2.071970] ixgbevf: Intel(R) 10GbE PCI Express Virtual Function Driver - version 4.6.3 On Mon, Jun 28, 2021 at 10:08 PM, Embedded Devel wrote: On Tuesday 29 June 2021 06:55:46 AM (+07:00), Scott Peal wrote: Hello everyone, Does anyone know someone who can install StarlingX with OpenStack/K8m remotely at a fair price? I have tried multiple times but not getting anywhere. Stuck on figuring out how to install the NIC drivers and set up the VLANs correctly. its really pretty straight forward to install, not sure what you mean by "install the NIC drivers", unless they are "unsupported" cards, then its alot of work for building a custom image.so what cards are they that require "installing NIC drivers" ? or are you guessing really ? Servers:- 2 controllers- 2 Rook/Ceph storage- 8 hosts/workers- Dell PowerEdge C6100 X5650 dual Xeon hex-cores  Networking:- Dual 10GbE NICs in each server- Dual 1GbE NICs in each server- Dual Dell S6010 switches running FTOS 9.x I am a one man band so not a lot of funds, but getting someone to show me the ropes would be worth it.  Regards, Scott --  Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com --  Sent with Vivaldi Mail. 
Download Vivaldi for free at vivaldi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Jun 29 14:07:22 2021 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 29 Jun 2021 14:07:22 +0000 Subject: [Starlingx-discuss] No StarlingX build meeting today Message-ID: FYI - the bi-weekly meeting for builds is cancelled this week. Frank, PL for StarlingX Build project -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Tue Jun 29 14:12:20 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Tue, 29 Jun 2021 09:12:20 -0500 Subject: [Starlingx-discuss] October 2021 PTG Dates & Registration Message-ID: <05081016-72E3-4BC6-A21C-4366BAA22EFC@openstack.org> Hi everyone, We're happy to announce the next virtual PTG[1] will take place October 18-22, 2021! Registration is now open[2]. The virtual PTG is free to attend, but make sure to register so you recieve important communications like schedules, passwords, and other relevant updates. Next week, keep an eye out for info regarding team sign-ups. Can't wait to see you all there! Ashlee [1] https://www.openstack.org/ptg/ [2] https://openinfra-ptg.eventbrite.com From openinfradn at gmail.com Tue Jun 29 14:55:18 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 29 Jun 2021 20:25:18 +0530 Subject: [Starlingx-discuss] Shared storage between sub-clouds Message-ID: Hi, Is it possible to provision two sub-clouds with a shared storage? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue Jun 29 18:04:21 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 29 Jun 2021 14:04:21 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 1788 - Failure! Message-ID: <1713290069.245.1624989866124.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 1788 Status: Failure Timestamp: 20210629T175428Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210629T173415Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210629T173415Z DOCKER_BUILD_ID: jenkins-master-distro-20210629T173415Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210629T173415Z/logs BUILD_IMG: false FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210629T173415Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Tue Jun 29 18:04:27 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 29 Jun 2021 14:04:27 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 542 - Failure! 
Message-ID: <328426013.248.1624989867836.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 542 Status: Failure Timestamp: 20210629T173415Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210629T173415Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From scott.little at windriver.com Tue Jun 29 18:59:00 2021 From: scott.little at windriver.com (Scott Little) Date: Tue, 29 Jun 2021 14:59:00 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 542 - Failure! In-Reply-To: <328426013.248.1624989867836.JavaMail.javamailuser@localhost> References: <328426013.248.1624989867836.JavaMail.javamailuser@localhost> Message-ID: Mihnea, Please revert your containerd change, it has broken the build.  Logs are here... http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210629T173415Z/logs/std/failed-packages/containerd-1.4.6-11.tis/ CHANGELOG: ... ./cgcs-root/stx/integ 4c682e9c434db74f616a039d6ab4415f36fedc03  2021-06-25 18:53:56 +0300   Mihnea Saracin  Mihnea.Saracin at windriver.com    Update containerd to 1.4.6 ... On 2021-06-29 2:04 p.m., build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_layer_distro_master_master > Build #: 542 > Status: Failure > Timestamp: 20210629T173415Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210629T173415Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Jun 29 18:59:11 2021 From: scott.little at windriver.com (Scott Little) Date: Tue, 29 Jun 2021 14:59:11 -0400 Subject: [Starlingx-discuss] Slow repo sync times and the yocto kernel In-Reply-To: <15da574c-7b55-3f2c-0776-a0c7b14494f8@windriver.com> References: <1d1828bf-2aa4-f100-42ee-3d11d14f5af8@windriver.com> <15da574c-7b55-3f2c-0776-a0c7b14494f8@windriver.com> Message-ID: Concerns have been raised with the security implications of the git protocol.  We really should return to https if at all possible. Options: 1)  Use no-clone-bundle, as in     repo sync -j20 --no-clone-bundle However this hurts performance and transfers a significant load onto the git servers. Slightly better would be ...     repo sync -j20 || repo sync -j20 --no-clone-bundle Where the first commands is efficient, but fails on yocto, followed by the second that handles yocto. 2) Return to downloading the full history of linux-yocto.  It eats an hour on the initial clone, but after that it's not too bad. 3) Set up a mirror of linux-yocto on a more powerful hosting site.   git hub? 4) Switch to tarball snapshots of linux-yocto. Scott On 2021-06-28 10:54 a.m., Scott Little wrote: > I have posted a better fix for review ... > > https://review.opendev.org/c/starlingx/manifest/+/798340 > > Scott > > > On 2021-06-28 10:51 a.m., Scott Little wrote: >> The manifest change to implement option 2 has been merged.  
However a >> lot of folks are now having 'repo sync' issues with linux-yocto. >> Strangely it doesn't hit the first repo sync, but all subsequent repo >> sync's are affected. >> >> If you are seeing an error from repo sync, please use this work >> around ... >> >> repo init ... >> rm -rf .repo/project-objects/linux-yocto.git* >> .repo/projects/cgcs-root/stx/git/linux-yocto-* >> repo sync ... >> >> I've reported the bug upsteam... >> https://bugs.chromium.org/p/gerrit/issues/detail?id=14700&q=tyranscooter&can=2 >> ... if you have a gmail account and are willing to do so, add a star >> to increase visibility of this strange error. >> >> I'll also be looking for a better fix on our end. >> >> Scott >> >> >> >> On 2021-06-21 4:10 p.m., Scott Little wrote: >>> Hi all >>> >>> The yocto kernel git was added to the StarlingX manifests late last >>> week.  Since then I've heard a lot of grumbling about slow 'repo >>> sync' times.  It affects folks setting up a new distro or monolithic >>> workspace for the first time.  The repo-sync time can exceed an hour >>> as the entire history of the linux kernel is downloaded.  You will >>> also notice an additional 5.5 GB of storage consumed to hold all >>> this history.  Subsequent repo sync's should be fast. >>> >>> So the question is... what if anything do we do about it? >>> >>> Our options... >>> >>> 1) Leave it as is. >>> >>> Hope that folks are mostly working in the 'flock' or 'container' >>> layers, and NOT using monolithic builds, and so the number of folk >>> impacted is low.   Folk working at the distro layer or using >>> monolithic can work on something else, or go to lunch, while they >>> wait for the initial repo sync to complete. >>> >>> >>> 2) Try to minimize the amount of kernel history we download through >>> a manifest change.  Limiting the git history depth does the trick ... >>> >>>    >>>   >> clone-depth="100" upstream="v5.10/standard/intel-x86" >>> revision="refs/tags/v5.10.30" name="linux-yocto" >>> path="cgcs-root/stx/git/linux-yocto-std"/> >>>   >> clone-depth="100" upstream="v5.10/standard/preempt-rt/intel-x86" >>> revision="2112f10d3d0b558c9ece3ab562c41b7f6d89cff4" >>> name="linux-yocto.git" path="cgcs-root/stx/git/linux-yocto-rt"/> >>> >>> The good ... >>> >>>  - repo sync time drops from ~1 hr to ~5 min >>> >>>  - storage drops from ~5.5 GB to ~ 5GB >>> >>> The bad ... >>> >>> - This is fragile.  It assumes that the desired rt sha can be >>> reached from 100 commits from head of branch.  However, the >>> connection to the upstream git server drops if we ask for much more >>> than that. e.g. depth=500 is a guaranteed fail.  So upstream adds a >>> few patches and we might start failing our repo-sync. >>> >>> - The history is incomplete, this may hinder kernel developers. A >>> 'git fetch linux-yocto' should pull in the rest of the history, so >>> probably not a blocker. >>> >>> >>> 3) We could double the number of manifests at each layer. One would >>> only pull in the minimal kernel history, and other the full history. >>> >>> >>> 4) Create a mirror an a larger git server, like github, and hope >>> that significantly improves the download speed. >>> >>> >>> 5) Download a tarball of the yocto kernel, rather than pulling in >>> it's git tree.  Yocto's git server doesn't seem to be set up to >>> serve custom tarballs based on a requested sha.  We would have to >>> set it all up manually, and it's not remotely convenient to kernel >>> developers. 
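For the incomplete-history caveat noted under option 2 above, a minimal sketch of restoring the full kernel history in an already-synced workspace might look like the following (the remote name linux-yocto is assumed from the "git fetch linux-yocto" note above; check "git remote -v" for the actual name in your checkout):

    cd cgcs-root/stx/git/linux-yocto-std
    git remote -v                           # confirm how the yocto remote is named
    git fetch --unshallow linux-yocto       # pull the history beyond the clone-depth limit
    # or simply: git fetch linux-yocto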
>>> >>> >>> Of these options, I'm leaning toward option 2, but look forward to >>> hearing from the community. >>> >>> >>> Scott >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Jun 29 23:50:18 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 29 Jun 2021 19:50:18 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 154 - Failure! Message-ID: <15627785.254.1625010618939.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 154 Status: Failure Timestamp: 20210629T232513Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210629T232513Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From build.starlingx at gmail.com Wed Jun 30 04:53:45 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 30 Jun 2021 00:53:45 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1292 - Failure! Message-ID: <1979952878.258.1625028826279.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1292 Status: Failure Timestamp: 20210630T044548Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210630T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210630T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210630T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210630T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Wed Jun 30 04:53:47 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 30 Jun 2021 00:53:47 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 969 - Failure! 
Message-ID: <483711891.261.1625028828292.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 969 Status: Failure Timestamp: 20210630T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210630T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From alexandru.dimofte at intel.com Wed Jun 30 12:08:56 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 30 Jun 2021 12:08:56 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210629T220113Z Message-ID: Sanity Test from 2021-June-30 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210629T220113Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210629T220113Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 71 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 83 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Bill.Zvonar at windriver.com Wed Jun 30 12:34:11 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 30 Jun 2021 12:34:11 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 30, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210630T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Greg.Waines at windriver.com Wed Jun 30 12:47:08 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 30 Jun 2021 12:47:08 +0000 Subject: [Starlingx-discuss] Shared storage between sub-clouds In-Reply-To: References: Message-ID: You could configure the same external netapp storage backend for each subcloud. https://docs.starlingx.io/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.html Greg. From: open infra Sent: Tuesday, June 29, 2021 10:55 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Shared storage between sub-clouds [Please note: This e-mail is from an EXTERNAL e-mail address] Hi, Is it possible to provision two sub-clouds with a shared storage? Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amy at demarco.com Wed Jun 30 12:47:04 2021 From: amy at demarco.com (Amy Marrich) Date: Wed, 30 Jun 2021 07:47:04 -0500 Subject: [Starlingx-discuss] [Diversity] Diversity and Inclusion July Meeting Update Message-ID: Due to most US companies having July 5th off, we will be moving the Jully D&I WG meeting Please join us Monday, July 19th at 17:00 UTC in the #openinfra-diversity channel on OFTC. The agenda can be found at https://etherpad.openstack.org/p/ diversity-wg-agenda. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Wed Jun 30 12:57:50 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 30 Jun 2021 18:27:50 +0530 Subject: [Starlingx-discuss] Shared storage between sub-clouds In-Reply-To: References: Message-ID: Thanks Greg. On Wed, Jun 30, 2021 at 6:17 PM Waines, Greg wrote: > You could configure the same external netapp storage backend for each > subcloud. > > > https://docs.starlingx.io/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.html > > > > Greg. > > > > *From:* open infra > *Sent:* Tuesday, June 29, 2021 10:55 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Shared storage between sub-clouds > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi, > > > > Is it possible to provision two sub-clouds with a shared storage? > > > > Regards, > > Danishka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Jun 30 14:35:41 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 30 Jun 2021 09:35:41 -0500 Subject: [Starlingx-discuss] Stop recording the meetings In-Reply-To: References: Message-ID: <1A210C2E-5A39-4369-BB25-5AD5CCE45E7F@gmail.com> Hi, We’ve discussed this briefly on the Community Call today and there were no objections to this approach. The plan we outlined is the following: * Stop recording meetings next Monday (July 5) * Mark the current recordings page archived and point to the meeting notes If you have any comments or objections please reply to this thread or reach out to me __before the end of this week (July 4)__. Thanks, Ildikó > On Jun 10, 2021, at 13:14, Ildiko Vancsa wrote: > > Hi, > > I’m reaching out to you about the StarlingX meeting recordings. > > If you take a look at the meeting wiki[1] you will see that the most recent links to recordings are over a year old. During this time I haven’t received any requests or complaints until very recently. But this recent outreach was also about to check on the recordings in general just to understand if the meetings are still happening or not and not to listen back on either of them. > > Following the mailing list you can also see that most teams are posting their meeting logs that are usually on their meeting etherpads which gives everyone a chance to catch up on what was discussed and is a primary way to keep meeting history. > > Based on the above I would like to propose to stop recording the meetings. > > Please respond to this thread by the end of next week (June 20) if you have any questions or concerns to take into account before taking action. 
> > Thanks and Best Regards, > Ildikó > > [1] https://wiki.openstack.org/wiki/Starlingx/Meetings > > From Bill.Zvonar at windriver.com Wed Jun 30 15:37:26 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 30 Jun 2021 15:37:26 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 30, 2021) In-Reply-To: References: Message-ID: >From today's call... * Standing Topics * Build/Sanity * sanity all green since last week * Scott's looking into something re: builds - there was a download failure with a PTP src rpm, Scott found the issue & is addressing it * Gerrit Reviews in Need of Attention * nothing this week * Topics for this Week * (ildikov) Presentation slot at Edge Computing World (October 12-14) * https://www.edgecomputingworld.com * ~20-minute long slots * Ildiko said she needs to get back to the organizers in a week or so * (ildikov) Stop meeting recordings - http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011583.html * Ildiko to update the meeting recording page to mark it archived and ensure it is clear that we only stopped the recordings and the agenda and notes are available on the meeting etherpads * Ildiko to send out a follow up email about moving into action next Monday (July 5) * Slow repo sync times and the yocto kernel * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011693.html * Bill to ask the OS team (Linda, Mark) to comment on options * ARs from Previous Meetings * nothing this week * Open Requests for Help * Connectivity between worker and data network * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011655.html * Danishka was on the call, he & Greg discussed, Danishka will provide more info * Download images and push FAIL * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011665.html * Bill to ask them to try to pull those containers individually to see if it's a connectivity issue * Number of worker nodes * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011685.html * Greg responded, Danishka was on the call - the Docs should be updated to be consistent on 200 worker nodes * Build Matters (if required) * nothing this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, June 30, 2021 8:34 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community (& TSC) Call (June 30, 2021) Hi all, reminder of the weekly TSC/Community coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210630T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Bill.Zvonar at windriver.com Wed Jun 30 15:44:02 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 30 Jun 2021 15:44:02 +0000 Subject: [Starlingx-discuss] Download images and push FAIL In-Reply-To: <1624721547578.196637701.2558704178@optimcloud.com> References: <1624721547578.196637701.2558704178@optimcloud.com> Message-ID: Hi there - we discussed this on the Community Call today, and it was suggested to try to download the images individually to see if that works. The suspicion is that there may have been a connectivity issue causing some of your images to fail to download. Thanks, Bill... 
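As a concrete way to try that, a minimal sketch for the two images reported as failed in the quoted output below might be (run on the controller; the docker login step and the registry credentials are assumptions, while the image tags and registry.local:9001 are taken from the log itself):

    sudo docker pull k8s.gcr.io/kube-proxy:v1.18.1
    sudo docker pull gcr.io/kubernetes-helm/tiller:v2.16.1

    # if the pulls succeed, tag and push them into the local registry
    sudo docker login registry.local:9001
    for img in k8s.gcr.io/kube-proxy:v1.18.1 gcr.io/kubernetes-helm/tiller:v2.16.1; do
        sudo docker tag "$img" "registry.local:9001/$img"
        sudo docker push "registry.local:9001/$img"
    done

If the individual pulls themselves time out or return 404, that points at a proxy/DNS or upstream registry availability problem rather than the bootstrap playbook.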
-----Original Message----- From: Embedded Devel Sent: Saturday, June 26, 2021 11:37 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Download images and push FAIL [Please note: This e-mail is from an EXTERNAL e-mail address] stx 5.0 simplex on bare-metal seems some images "tiller" not found. TASK [common/push-docker-images : Download images and push to local registry] ********************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 1, "stderr": "Traceback (most recent call last):\n File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1624706034.05-101836802298463/download_images.py\", line 144, in \n raise Exception(\"Failed to download images %s\" % failed_downloads)\nException: Failed to download images ['k8s.gcr.io/kube-proxy:v1.18.1', 'gcr.io/kubernetes-helm/tiller:v2.16.1']\n", "stderr_lines": ["Traceback (most recent call last):", " File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1624706034.05-101836802298463/download_images.py\", line 144, in ", " raise Exception(\"Failed to download images %s\" % failed_downloads)", "Exception: Failed to download images ['k8s.gcr.io/kube-proxy:v1.18.1', 'gcr.io/kubernetes-helm/tiller:v2.16.1']"], "stdout": "Image is up to date for sha256:a595af0107f98768274e9143be61c7c80a8df2505ced520c9160f4e16ed42cd1\nImage is up to date for sha256:d1ccdd18e6ed8d91e3754e90c4b6cee42750ba165c75d3c78b4a31f057dd0423\nImage is up to date for sha256:6c9320041a7b5d00da54dda3a6bf9d6983b432ca5245a8254c83fc694d023810\nImage is up to date for sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c\nImage is up to date for sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f\nImage is up to date for sha256:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5\nImage is up to date for sha256:cb6799752c46cb16c0c5bebcb355e988a1cd1b3745ce8b67e004abe9a81340a8\nImage is up to date for sha256:fc05bc4225f39e81dcbd0035457977276ed8a6054e6bde6406e39b672e9724f5\nImage is up to date for sha256:53aa421faf0acd88f8a4cb113e9db2cc65b2a2954640ed96a56c4b94233674d8\nImage is up to date for sha256:98793d0a88c823c4fc0fb1b3833d12932be270fc4b6d62bc181f0f54413fe12d\nImage is up to date for sha256:7cf8e2d1b7337338a3f977d07abdf63d80d5005458dfab3ab8962c2bab99d40d\nImage is up to date for sha256:f2a1744e620d3bf673f8351dcfaa5334fe4888cfcd5476b0222499ccc1b158fe\nImage is up to date for sha256:a2bef2b25274b1acbdfde5e2f3de432475e15d4036b2108cea2dce968b0c29ea\nImage is up to date for sha256:3061a8a540ac0dee710c1b37edad7855581dc027a18890759c91792615505f13\nImage is up to date for sha256:fb95693fe5c67fc46893c1735d85f00f462dc7f5254f5ddbcbbe85fd4ef71717\nImage download succeeded: k8s.gcr.io/kube-apiserver:v1.18.1\nImage push succeeded: registry.local:9001/k8s.gcr.io/kube-apiserver:v1.18.1\nImage k8s.gcr.io/kube-apiserver:v1.18.1 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/kube-controller-manager:v1.18.1\nImage push succeeded: registry.local:9001/k8s.gcr.io/kube-controller-manager:v1.18.1\nImage k8s.gcr.io/kube-controller-manager:v1.18.1 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/kube-scheduler:v1.18.1\nImage push succeeded: registry.local:9001/k8s.gcr.io/kube-scheduler:v1.18.1\nImage k8s.gcr.io/kube-scheduler:v1.18.1 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/kube-proxy:v1.18.1\n Image download failed: k8s.gcr.io/kube-proxy:v1.18.1404 Client Error: Not Found (\"No such 
image: k8s.gcr.io/kube-proxy:v1.18.1\")\nImage download succeeded: k8s.gcr.io/pause:3.2\nImage push succeeded: registry.local:9001/k8s.gcr.io/pause:3.2\nImage k8s.gcr.io/pause:3.2 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/etcd:3.4.3-0\nImage push succeeded: registry.local:9001/k8s.gcr.io/etcd:3.4.3-0\nImage k8s.gcr.io/etcd:3.4.3-0 download succeeded by containerd\nImage download succeeded: k8s.gcr.io/coredns:1.6.7\nImage push succeeded: registry.local:9001/k8s.gcr.io/coredns:1.6.7\nImage k8s.gcr.io/coredns:1.6.7 download succeeded by containerd\n Image download failed: quay.io/calico/cni:v3.12.0500 Server Error: Internal Server Error (\"Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\")\nSleep 20s before retry downloading image quay.io/calico/cni:v3.12.0 ...\nImage download succeeded: quay.io/calico/cni:v3.12.0\nImage push succeeded: registry.local:9001/quay.io/calico/cni:v3.12.0\nImage quay.io/calico/cni:v3.12.0 download succeeded by containerd\nImage download succeeded: quay.io/calico/node:v3.12.0\nImage push succeeded: registry.local:9001/quay.io/calico/node:v3.12.0\nImage quay.io/calico/node:v3.12.0 download succeeded by containerd\nImage download succeeded: quay.io/calico/kube-controllers:v3.12.0\nImage push succeeded: registry.local:9001/quay.io/calico/kube-controllers:v3.12.0\nImage quay.io/calico/kube-controllers:v3.12.0 download succeeded by containerd\nImage download succeeded: quay.io/calico/pod2daemon-flexvol:v3.12.0\nImage push succeeded: registry.local:9001/quay.io/calico/pod2daemon-flexvol:v3.12.0\nImage quay.io/calico/pod2daemon-flexvol:v3.12.0 download succeeded by containerd\nImage download succeeded: docker.io/nfvpe/multus:v3.4\nImage push succeeded: registry.local:9001/docker.io/nfvpe/multus:v3.4\nImage docker.io/nfvpe/multus:v3.4 download succeeded by containerd\nImage download succeeded: docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8\nImage push succeeded: registry.local:9001/docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8\nImage docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8 download succeeded by containerd\nImage download succeeded: docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae\nImage push succeeded: registry.local:9001/docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae\nImage docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae download succeeded by containerd\nImage download succeeded: gcr.io/kubernetes-helm/tiller:v2.16.1\n Image download failed: gcr.io/kubernetes-helm/tiller:v2.16.1404 Client Error: Not Found (\"No such image: gcr.io/kubernetes-helm/tiller:v2.16.1\")\nImage download succeeded: quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic\nImage push succeeded: registry.local:9001/quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic\nImage quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic download succeeded by containerd\nImage download succeeded: docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0\nImage push succeeded: registry.local:9001/docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0\nImage docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0 doImage is up to date for sha256:7fb3c2364b87e9241db7549bf11d42c129130f44e930d1ce36523fc693186e89\nImage is up to date for sha256:d4553944fbf7b50f20eece0ec3f638202fdbe2a1a597af8c3f3823201cc695b3\nwnload succeeded by 
containerd\nImage download succeeded: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1\nImage push succeeded: registry.local:9001/quay.io/stackanetes/kubernetes-entrypoint:v0.3.1\nImage quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 download succeeded by containerd\nImage download succeeded: quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\nImage push succeeded: registry.local:9001/quay.io/k8scsi/snapshot-controller:v2.0.0-rc2\nImage quay.io/k8scsi/snapshot-controller:v2.0.0-rc2 download succeeded by containerd\n", "stdout_lines": ["Image is up to date for sha256:a595af0107f98768274e9143be61c7c80a8df2505ced520c9160f4e16ed42cd1", "Image is up to date for sha256:d1ccdd18e6ed8d91e3754e90c4b6cee42750ba165c75d3c78b4a31f057dd0423", "Image is up to date for sha256:6c9320041a7b5d00da54dda3a6bf9d6983b432ca5245a8254c83fc694d023810", "Image is up to date for sha256:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c", "Image is up to date for sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f", "Image is up to date for sha256:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5", "Image is up to date for sha256:cb6799752c46cb16c0c5bebcb355e988a1cd1b3745ce8b67e004abe9a81340a8", "Image is up to date for sha256:fc05bc4225f39e81dcbd0035457977276ed8a6054e6bde6406e39b672e9724f5", "Image is up to date for sha256:53aa421faf0acd88f8a4cb113e9db2cc65b2a2954640ed96a56c4b94233674d8", "Image is up to date for sha256:98793d0a88c823c4fc0fb1b3833d12932be270fc4b6d62bc181f0f54413fe12d", "Image is up to date for sha256:7cf8e2d1b7337338a3f977d07abdf63d80d5005458dfab3ab8962c2bab99d40d", "Image is up to date for sha256:f2a1744e620d3bf673f8351dcfaa5334fe4888cfcd5476b0222499ccc1b158fe", "Image is up to date for sha256:a2bef2b25274b1acbdfde5e2f3de432475e15d4036b2108cea2dce968b0c29ea", "Image is up to date for sha256:3061a8a540ac0dee710c1b37edad7855581dc027a18890759c91792615505f13", "Image is up to date for sha256:fb95693fe5c67fc46893c1735d85f00f462dc7f5254f5ddbcbbe85fd4ef71717", "Image download succeeded: k8s.gcr.io/kube-apiserver:v1.18.1", "Image push succeeded: registry.local:9001/k8s.gcr.io/kube-apiserver:v1.18.1", "Image k8s.gcr.io/kube-apiserver:v1.18.1 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/kube-controller-manager:v1.18.1", "Image push succeeded: registry.local:9001/k8s.gcr.io/kube-controller-manager:v1.18.1", "Image k8s.gcr.io/kube-controller-manager:v1.18.1 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/kube-scheduler:v1.18.1", "Image push succeeded: registry.local:9001/k8s.gcr.io/kube-scheduler:v1.18.1", "Image k8s.gcr.io/kube-scheduler:v1.18.1 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/kube-proxy:v1.18.1", " Image download failed: k8s.gcr.io/kube-proxy:v1.18.1404 Client Error: Not Found (\"No such image: k8s.gcr.io/kube-proxy:v1.18.1\")", "Image download succeeded: k8s.gcr.io/pause:3.2", "Image push succeeded: registry.local:9001/k8s.gcr.io/pause:3.2", "Image k8s.gcr.io/pause:3.2 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/etcd:3.4.3-0", "Image push succeeded: registry.local:9001/k8s.gcr.io/etcd:3.4.3-0", "Image k8s.gcr.io/etcd:3.4.3-0 download succeeded by containerd", "Image download succeeded: k8s.gcr.io/coredns:1.6.7", "Image push succeeded: registry.local:9001/k8s.gcr.io/coredns:1.6.7", "Image k8s.gcr.io/coredns:1.6.7 download succeeded by containerd", " Image download failed: quay.io/calico/cni:v3.12.0500 Server Error: Internal Server Error (\"Get 
https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\")", "Sleep 20s before retry downloading image quay.io/calico/cni:v3.12.0 ...", "Image download succeeded: quay.io/calico/cni:v3.12.0", "Image push succeeded: registry.local:9001/quay.io/calico/cni:v3.12.0", "Image quay.io/calico/cni:v3.12.0 download succeeded by containerd", "Image download succeeded: quay.io/calico/node:v3.12.0", "Image push succeeded: registry.local:9001/quay.io/calico/node:v3.12.0", "Image quay.io/calico/node:v3.12.0 download succeeded by containerd", "Image download succeeded: quay.io/calico/kube-controllers:v3.12.0", "Image push succeeded: registry.local:9001/quay.io/calico/kube-controllers:v3.12.0", "Image quay.io/calico/kube-controllers:v3.12.0 download succeeded by containerd", "Image download succeeded: quay.io/calico/pod2daemon-flexvol:v3.12.0", "Image push succeeded: registry.local:9001/quay.io/calico/pod2daemon-flexvol:v3.12.0", "Image quay.io/calico/pod2daemon-flexvol:v3.12.0 download succeeded by containerd", "Image download succeeded: docker.io/nfvpe/multus:v3.4", "Image push succeeded: registry.local:9001/docker.io/nfvpe/multus:v3.4", "Image docker.io/nfvpe/multus:v3.4 download succeeded by containerd", "Image download succeeded: docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8", "Image push succeeded: registry.local:9001/docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8", "Image docker.io/starlingx/k8s-cni-sriov:stx.5.0-v2.6-7-gb18123d8 download succeeded by containerd", "Image download succeeded: docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae", "Image push succeeded: registry.local:9001/docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae", "Image docker.io/starlingx/k8s-plugins-sriov-network-device:stx.4.0-v3.2-16-g4e0302ae download succeeded by containerd", "Image download succeeded: gcr.io/kubernetes-helm/tiller:v2.16.1", " Image download failed: gcr.io/kubernetes-helm/tiller:v2.16.1404 Client Error: Not Found (\"No such image: gcr.io/kubernetes-helm/tiller:v2.16.1\")", "Image download succeeded: quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic", "Image push succeeded: registry.local:9001/quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic", "Image quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic download succeeded by containerd", "Image download succeeded: docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0", "Image push succeeded: registry.local:9001/docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0", "Image docker.io/starlingx/n3000-opae:stx.4.0-v1.0.0 doImage is up to date for sha256:7fb3c2364b87e9241db7549bf11d42c129130f44e930d1ce36523fc693186e89", "Image is up to date for sha256:d4553944fbf7b50f20eece0ec3f638202fdbe2a1a597af8c3f3823201cc695b3", "wnload succeeded by containerd", "Image download succeeded: quay.io/stackanetes/kubernetes-entrypoint:v0.3.1", "Image push succeeded: registry.local:9001/quay.io/stackanetes/kubernetes-entrypoint:v0.3.1", "Image quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 download succeeded by containerd", "Image download succeeded: quay.io/k8scsi/snapshot-controller:v2.0.0-rc2", "Image push succeeded: registry.local:9001/quay.io/k8scsi/snapshot-controller:v2.0.0-rc2", "Image quay.io/k8scsi/snapshot-controller:v2.0.0-rc2 download succeeded by containerd"]} PLAY RECAP 
**************************************************************************************************************************************************** localhost : ok=187 changed=65 unreachable=0 failed=1 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Wed Jun 30 15:58:39 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 30 Jun 2021 11:58:39 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 155 - Still Failing! In-Reply-To: <1094596499.252.1625010617164.JavaMail.javamailuser@localhost> References: <1094596499.252.1625010617164.JavaMail.javamailuser@localhost> Message-ID: <594176230.265.1625068720060.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 155 Status: Still Failing Timestamp: 20210630T153233Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20210630T153233Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From Ghada.Khalil at windriver.com Wed Jun 30 20:22:11 2021 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 30 Jun 2021 20:22:11 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - June 30/2021 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release Team Meeting - Jun 30 2021 stx.5.0 - stx.5.0 Docs - Team made excellent progress on cherrypicks to the r/stx.5.0 branch; very close to finishing - Mary will confirm the status and follow up with Scott on re-tagging - Scott already had some questions sent to Ron Stone stx.6.0 - Release Planning Spreadsheet: https://docs.google.com/spreadsheets/d/13p0BMlBgJXUVForOFsblAJq9jA1-FMBlmhV5TIc70IE/edit#gid=1107209846 - Made good progress by the PLs to provide feature plans/dates - Need a few items to follow up on - Any features missing from the list? - Frank: We may add a few prep items to stx.6.0 in support on the Debian OS transition - Should have a better view by end of July - Verification Plans for the release - The Intel test team is only focused on sanity - Need to firm up the plans for the release regression and feature testing. From maryx.camp at intel.com Wed Jun 30 21:15:15 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 30 Jun 2021 21:15:15 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 30-Jun-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 30-Jun-21 All -- reviews merged since last meeting: >50 (cherry picks) Status/questions/opens Mary will be out for the next 2 weeks (July 6-16), returning on Monday July 19. Since things are quiet now, we agreed to cancel this meeting for the next 2 weeks. AR Mary to send a meeting cancellation notice. Thursday July 1 is a holiday in Canada. Adil is leaving the team after next week. Thanks for all your efforts on the documentation, you will be missed! 
Mary will also send note to discuss-list about R5 retrospective feedback, for docs only. Cherry pick status - we think we are done. Ron will double-check using "git log --no-merges" (he has the detailed command line). If it's not done, we agreed to create new reviews in the R5 branch. Scott Little will re-tag the branch and we'll go on from there. Installation guide discrepancy Recommendations: Make a new review with the "Install Notice" changes for the R5 install guides in main branch. Explain why we're doing this in the commit message. AR Mary. Cherry pick that review into the R5 branch. Compare the R5 and R6 install guides to be sure they are aligned & accurate. (Beyond Compare is pretty good for this) Background Here is what I found when looking into the missing "72 hour notice" issue we discovered last week. 792961 "Install Notice" This review added the 72 hour notice to 4 install guides. Merged on 5/25. 793093 "R5 updates to landing page and installation" Merged on 5/31. This review created the R6 install guides folder in the main branch by duplicating the R5 guides at the time (including the 72 hour note!) It also moved the R1-R4 install guides into the R5-only branch. (R5 docs were not moved since they were already present in the branch. This will be important later.) 794024 "Add R5 install guides to latest branch" Merged on 6/2. Copied the R5 install guides from R5 branch back to the main branch. Because the "Install Notice" review [792961] was not cherry picked into R5 branch, that change is only present in the "latest" aka R6 install guides. You can see it if you look at this file: starlingx\docs\doc\source\deploy_install_guides\r6_release\bare_metal\aio_duplex_install_kubernetes.rst There was a 2 day period when R5 install guides only existed on the R5 branch (5/31 - 6/2). I have gone over the list of Closed reviews during that time and don't see any others that may have fallen thru the cracks like the "Install Notice" review. From maryx.camp at intel.com Wed Jun 30 21:18:00 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 30 Jun 2021 21:18:00 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Docs Team Call Message-ID: 07July21 meeting cancelled. Extending the meeting series through 29-Sep-21 - 3:30 pm Eastern - 12:30 pm Pacific - Docs Team Call Call details * https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o Passcode: 419405 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes * The agenda and notes for each call are kept here: https://etherpad.openstack.org/p/stx-documentation * Call recordings: https://wiki.openstack.org/wiki/Starlingx/Meeting_Logs#Docs_Team_Call -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4924 bytes Desc: not available URL: From maryx.camp at intel.com Wed Jun 30 21:18:27 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 30 Jun 2021 21:18:27 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Docs Team Call Message-ID: 14July21 meeting cancelled. 
Extending the meeting series through 29-Sep-21 - 3:30 pm Eastern - 12:30 pm Pacific - Docs Team Call Call details * https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o Passcode: 419405 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes * The agenda and notes for each call are kept here: https://etherpad.openstack.org/p/stx-documentation * Call recordings: https://wiki.openstack.org/wiki/Starlingx/Meeting_Logs#Docs_Team_Call -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4924 bytes Desc: not available URL: From maryx.camp at intel.com Wed Jun 30 21:22:48 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 30 Jun 2021 21:22:48 +0000 Subject: [Starlingx-discuss] Docs Meetings cancelled 07 July and 14 July Message-ID: The StarlingX Docs team meetings are cancelled for July 07 and July 14. For any urgent docs issues, please contact Greg Waines. thanks, Mary Camp Kelly Services Technical Writer | maryx.camp at intel.com -------------- next part -------------- An HTML attachment was scrubbed... URL: