From maria.g.perez.ibarra at intel.com Sat Jun 1 02:38:18 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Sat, 1 Jun 2019 02:38:18 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190531 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-MAY-31 (link) Status: YELLOW ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 1 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 2 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs | 2 TCs FAIL Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Cannot get prompt after login in ssh or serial https://bugs.launchpad.net/starlingx/+bug/1829941 Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko at openstack.org Sun Jun 2 23:52:26 2019 From: ildiko at openstack.org (Ildiko Vancsa) Date: Mon, 3 Jun 2019 01:52:26 +0200 Subject: [Starlingx-discuss] StarlingX TSC election - Nomination period ended Message-ID: Hi StarlingX Community, I would like to inform you that the nomination period[1] for the StarlingX TSC election has ended. Thank you to all candidates who submitted their nominations into this round. The election officials[2] are still finalizing some details and will come back with further updates shortly. Thank you, [1] https://docs.starlingx.io/election/ [2] https://docs.starlingx.io/election/#election-officials From Frank.Miller at windriver.com Mon Jun 3 12:47:44 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 3 Jun 2019 12:47:44 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Weekly Meeting Message-ID: Agenda for Monday June 3: 1. Review outstanding SBs and plans to close in time for MS3 milestone 2. Discussion on how to disable FM containerization in stx2.0 so it can easily be enabled once stx3.0 opens 3. Updates in high priority bugs 4. Test team status Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2429 bytes Desc: not available URL:

From vm.rod25 at gmail.com Mon Jun 3 12:49:32 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 3 Jun 2019 07:49:32 -0500 Subject: [Starlingx-discuss] Build meeting May 30 notes Message-ID:

Build meeting May 30

Opens :

- We did have 2 failures ( mail on the ML ). This was caused by concurrent docker builds running. It needs more research, but there is a workaround for it. Mail with a description of the bug has been sent.
- Build issue, it was raised and fixed quickly (thanks Scott )
- Erich took a look at the review but it was already merged. https://review.opendev.org/#/c/619631/. Please wait for reviews from other team members before merging
- The Python tool to improve the speed of the mirror will be tracked by Abraham's team
- Abraham already took the AR to include it on the backlog
- There was another failure due to a missing repo; there was a reference to some RPMs in that repo. A Launchpad bug will be opened for that
- Victor will start the thread about layer build requirements
- Erich has an old review on Gerrit pending.
- https://review.opendev.org/#/c/653152/

Regards

Victor Rodriguez -------------- next part -------------- An HTML attachment was scrubbed... URL:

From vm.rod25 at gmail.com Mon Jun 3 13:21:10 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 3 Jun 2019 08:21:10 -0500 Subject: [Starlingx-discuss] Performance tests for Networking In-Reply-To: <9A85D2917C58154C960D95352B22818BD074C044@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD074C044@fmsmsx123.amr.corp.intel.com> Message-ID:

On Thu, May 30, 2019 at 12:16 PM Jones, Bruce E wrote: > > We’ve been having a short discussion on this topic internally and I want to push the conversation out into the open. So I’m starting a thread. > > > > Forrest has some thoughts about how to proceed, but before we get into the details, I wanted to ask – which StarlingX sub-team should be the one to lead this work? Should it be in the Networking team or in the Test team? >

Hi Bruce, thanks for starting this thread.

Performance is a complex topic; it includes different areas such as network, storage, workload execution time and more. We have not given the performance topic the full attention it deserves (partly because of my fault). It has been living under the test and QA subproject, but with all the changes and the new incoming open framework, I am not sure there will be enough room for this big topic.
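To make the networking piece concrete, here is a minimal sketch of the kind of throughput probe such a performance effort would automate. This is an illustration only, not an existing STX suite: it assumes iperf3 is installed on both ends, that an iperf3 server ("iperf3 -s") is already running on the target, and that 10.10.10.2 is a placeholder address for the system under test.

    #!/bin/bash
    # Minimal network throughput probe -- illustrative sketch only.
    TARGET=${1:-10.10.10.2}   # placeholder system-under-test address
    DURATION=30               # seconds per run
    STREAMS=4                 # parallel TCP streams
    OUT="tcp_${TARGET}_$(date +%Y%m%d).json"

    # JSON output makes it easy to archive one result per ISO build for trending.
    iperf3 -c "$TARGET" -t "$DURATION" -P "$STREAMS" -J > "$OUT"

    # Print the aggregate receive bandwidth (bits/sec) as a single gate number.
    python -c "import json,sys; print(json.load(open(sys.argv[1]))['end']['sum_received']['bits_per_second'])" "$OUT"

Trending that single number per ISO build would already catch gross throughput regressions, whichever sub-team ends up owning the suite.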
My recommendation will be that we move to Networking team for now and once we have solid progress we could start a separated subproject team ( I will be more than happy to take the AR to be responsible for it ) Open for feedback Victor Rodriguez > > > brucej > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Mon Jun 3 13:58:10 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 3 Jun 2019 13:58:10 +0000 Subject: [Starlingx-discuss] Performance tests for Networking In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD074C044@fmsmsx123.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4FC7D6@ALA-MBD.corp.ad.wrs.com> Agree that the networking team should prime this ...at least until a solid proposal and implementation are in place. Looking forward to see what Forrest is proposing in terms of next steps. Ghada -----Original Message----- From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] Sent: Monday, June 03, 2019 9:21 AM To: Jones, Bruce E Cc: Khalil, Ghada; Cabrales, Ada; Waheed, Numan; starlingx-discuss at lists.starlingx.io; Zhao, Forrest Subject: Re: [Starlingx-discuss] Performance tests for Networking On Thu, May 30, 2019 at 12:16 PM Jones, Bruce E wrote: > > We’ve been having a short discussion on this topic internally and I want to push the conversation out into the open. So I’m starting a thread. > > > > Forrest has some thoughts about how to proceed, but before we get into the details, I wanted to ask – which StarlingX sub-team should be the one to lead this work? Should it be in the Networking team or in the Test team? > Hi Bruce , thanks for starting this thread Performance is a complex topic, it includes different areas such as network, storage, workload execution time and more. We have had the performance topic a bit not with the full attention it deserves (part becaus eof my fault). It has been living under test and QA subproject, but with all the changes and new incoming open framework, I am not sure there will be enough room for this big topic. My recommendation will be that we move to Networking team for now and once we have solid progress we could start a separated subproject team ( I will be more than happy to take the AR to be responsible for it ) Open for feedback Victor Rodriguez > > > brucej > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Mon Jun 3 15:10:37 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 3 Jun 2019 15:10:37 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - May 30/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4FD8E1@ALA-MBD.corp.ad.wrs.com> Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release meeting agenda / notes May 30 2019 stx.2.0 - Containerized Openstack services - Prime: Frank / MS-3: June 14 - Executing to the plan - Finalizing what from the stx.3.0 deferred content to allow in until stx.2.0 RC branch is created - Integrate Containerized OVS - Prime: Forrest / MS-3: June 7 - Review is posted and testing is in progress. 
On track - Ansible Deployment - Prime: Dariush / MS-3: June 7 - Ada confirmed that we can go ahead with disabling config_controller - Openstack Patch Elimination - Prime: Bruce / MS-3: ?? - External nova placement is now resourced. May not make June 14. We should consider handling this as an exception. - Need update on NUMA-aware migration - Openstack Rebase - Prime: Frank / MS-3: May 30 - All outstanding items are now done - wrsroot change / MS-3: June 14 (dependent on testing) - Code is stable. - Jose/Saul have done testing and things look good - Numan is expecting to finish testing/adjusting automation suite by June 12 - Testing Status - Ada: Will add more resources to Container testing to mitigate delay from stability issues. - Bugs - 132 bugs currently tagged for stx.2.0 - Most are assigned to developers - Expect more to start coming in as regression ramps up - Need teams to shift their focus to bug resolution. Intel team has some bandwidth to take on bugs.

stx.3.0 - TSC agreed to start focusing on stx.3.0 content after the elections -- around June 20 - TO BE DISCUSSED - Do we create the stx.2.0 RC1 branch earlier to allow more freedom in merging stx.3.0 content? - Feedback from Curtis aligns with this recommendation - We need to make this decision when there is code lined up for stx.3.0 with no place to merge

From jose.perez.carranza at intel.com Mon Jun 3 15:11:22 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Mon, 3 Jun 2019 15:11:22 +0000 Subject: [Starlingx-discuss] [Containers] node-feature-discovery In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A97C6E4@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A97C6E4@fmsmsx101.amr.corp.intel.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A97CE1E@fmsmsx101.amr.corp.intel.com>

Hi Chris

Just to follow up on the question below, do you have any info to share about the node-feature-discovery feature?

Regards, José

> -----Original Message----- > From: Perez Carranza, Jose > Sent: Wednesday, May 29, 2019 11:34 AM > To: Chris Friesen ; starlingx- > discuss at lists.starlingx.io > Cc: Miller, Frank > Subject: [Containers] node-feature-discovery > > Hi Chris > > I'm checking storyboard for node-feature-discovery [1] to design test scenarios > about it, for me is not clear yet how to enable that feature on my deployment, > are you able to explain more on how to do it? I also see that documentation > was provided to docs team, are you able to point me out to that > documentation so I can have more details of this feature implementation. > > 1. https://storyboard.openstack.org/#!/story/2005193 > > Regards, > José >

From vm.rod25 at gmail.com Mon Jun 3 15:29:39 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 3 Jun 2019 10:29:39 -0500 Subject: [Starlingx-discuss] Multi-OS team meeting notes 6/03/19 Message-ID:

--------------------------------------------------------------------------- *Multi-OS team meeting * Summary of the meeting : 6/03/19 - Multi-OS support: - Specification for directory layout: APPROVED !!!
- https://review.opendev.org/#/c/634074/ - Marcela to send Debian base build scripts - WIP start sending them today - Wiki for Debian support - https://wiki.openstack.org/wiki/StarlingX/MultiOS/Debian - Start Thread to ML - Meeting with stakeholders for feedback - openSUSE build-demo - Status of the flock service build for openSUSE - Testing of the current effort to build the flock services in openSUSE - The build of flock services is our main goal - Clean spec files, enable warnings and errors in the rpmlint - spec cleaner tool is also part of the criteria - The next step will be the installation - Look for a tool that compares RPMs to check whether the openSUSE RPMs contain the same files as the current CentOS versions

Regards

Victor Rodriguez -------------- next part -------------- An HTML attachment was scrubbed... URL:

From Ghada.Khalil at windriver.com Mon Jun 3 15:29:24 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 3 Jun 2019 15:29:24 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 05/30 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4FD925@ALA-MBD.corp.ad.wrs.com>

Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking

Team Meeting Agenda/Notes - May 30/2019

Networking Test Status - Feature - Proceeding with feature testing now that there are green loads - ovs-dpdk Firewall & OVS process monitoring in progress - Containerized OVS: will start when new version is available. Current fcst for code merge: June 7 - Seeing one issue w/ losing IP address on VM migration; investigating and may open a bug - Regression - Test-cases have been defined. - As per Numan, regression test-cases need to be reviewed to ensure obsolete ones are removed - Automation - Need manual steps for a number of test-cases in order to proceed with automation. Action: Numan/ChrisW to arrange a session with Elio and team. - Plan to stop automation efforts

Feature Development - Containerized OVS - Patch uploaded: https://review.opendev.org/#/c/662195/1 - Testing in progress. Targeting code merge of June 7 - Multus / SRIOV CNI Plugins - Steve moved stx support to a more recent version; email sent - Chenjie will try the new version in starlingx and share his findings. This is lower priority than containerized ovs.

Bugs - https://bugs.launchpad.net/starlingx/+bug/1824829 , the patch has got one +2 and needs another one to get merged. - https://bugs.launchpad.net/starlingx/+bug/1822366 , don't have the NIC Intel 82599 (Niantic) 10 G >> need to reassign - https://bugs.launchpad.net/starlingx/+bug/1829403 , need Peng to execute the commands. - https://bugs.launchpad.net/starlingx/+bug/1829390 , the same bug as https://bugs.launchpad.net/starlingx/+bug/1829403

From adrien.macor at hotmail.com Mon Jun 3 14:18:36 2019 From: adrien.macor at hotmail.com (Adrien Macor) Date: Mon, 3 Jun 2019 14:18:36 +0000 Subject: [Starlingx-discuss] Informations about StarlingX Message-ID:

Hi,

I'm currently doing my bachelor thesis in the School of Engineering and Architecture of Fribourg. The name of this project is "Edge cloud orchestration and monitoring" and I'm now interested in StarlingX. I have a few questions for you:

1. I read this documentation: https://docs.starlingx.io/deployment_guides/current/duplex.html (the page notes that the containerized AIO-DX instructions are under development
and points to the All in One Duplex Configuration wiki page for approved instructions), but something is still not clear for me: let's suppose I have three physical servers; two in the same area, and the last one elsewhere (not on the same network). May I use this infrastructure with StarlingX? If I understand correctly, as long as I have connectivity between all my servers it's possible.

2. StarlingX enables deploying VNFs at the edge, but how does it differ from something like VMware or another virtualization tool?

3. Last one: is it better to deploy one VNF across all my edge sites, or deploy multiple VNFs across all my edge sites, or split one VNF across all my edge... and so on. Or does it depend on what the VNF is?

Thanks for the reply

Adriano Macor -------------- next part -------------- An HTML attachment was scrubbed... URL:

From scott.little at windriver.com Mon Jun 3 18:03:12 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 3 Jun 2019 14:03:12 -0400 Subject: [Starlingx-discuss] CENGN outage Monday June 3 Message-ID: <272816eb-87c7-d7fa-a3bf-6ec3801442a6@windriver.com>

We are now experiencing a second outage at CENGN.  This one started at about 1:50 EST.  I have opened another ticket.

Scott

From scott.little at windriver.com Mon Jun 3 18:31:08 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 3 Jun 2019 14:31:08 -0400 Subject: Re: [Starlingx-discuss] CENGN outage Monday June 3 In-Reply-To: <272816eb-87c7-d7fa-a3bf-6ec3801442a6@windriver.com> References: <272816eb-87c7-d7fa-a3bf-6ec3801442a6@windriver.com> Message-ID: <704b03d6-d318-80b6-ee68-3bd45a5038de@windriver.com>

CENGN is restored.

Scott

On 2019-06-03 2:03 p.m., Scott Little wrote: > We are now experiencing a second outage at CENGN.  This one started at > about 1:50 EST.  I have opened another ticket. > > Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From chris.friesen at windriver.com Mon Jun 3 19:35:38 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 3 Jun 2019 13:35:38 -0600 Subject: Re: [Starlingx-discuss] [Containers] node-feature-discovery In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A97CE1E@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A97C6E4@fmsmsx101.amr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2A97CE1E@fmsmsx101.amr.corp.intel.com> Message-ID: <1cc6c7d5-0e61-ba8f-0a00-f0b0d80e1e68@windriver.com>

Hi José,

The docs are still in progress. Here's what I sent to the docs folks:

The intent of this story was to deploy the functionality implemented by https://github.com/kubernetes-sigs/node-feature-discovery/tree/v0.3.0 as an optional component in StarlingX.

Basically it detects hardware features available on each node in a Kubernetes cluster, and advertises those features using kubernetes node labels. You can then use the regular Kubernetes label-based functionality to specify the nodes with features that are of interest. (nodeSelector, "kubectl get node -l ", etc.)

As of the beginning of May StarlingX should have a new file /opt/extracharts/node-feature-discovery-0.3.0.tgz which is a helm chart that provides the above functionality. Our version also allows for customization of various parameters via helm chart overrides. The configurable options and their defaults are as follows:

# namespace to use for chart resources. Must be specified.
namespace: default # label for the daemonset to find its pods app_label: node-feature-discovery # docker image to use for the pods image: quay.io/kubernetes_incubator/node-feature-discovery:v0.3.0 # interval (in secs) to scan the node features scan_interval: 60 # key/value pair to match against node labels to select which nodes # should run the node feature discovery. Defaults to all nodes. node_selector_key: node_selector_value: In the simple case where we want to run using all the default values, after initial install and configuration the helm chart can be installed by running "helm upgrade -i node-feature-discovery /opt/extracharts/node-feature-discovery-0.3.0.tgz". This should result in the creation of one pod per node which runs once per minute to update the node features. Thanks, Chris On 6/3/2019 9:11 AM, Perez Carranza, Jose wrote: > Hi Chris > > Just to follow up on below question, do you have any info to share about node-feature-discovery feature? > > Regards, > José > > >> -----Original Message----- >> From: Perez Carranza, Jose >> Sent: Wednesday, May 29, 2019 11:34 AM >> To: Chris Friesen ; starlingx- >> discuss at lists.starlingx.io >> Cc: Miller, Frank >> Subject: [Containers] node-feature-discovery >> >> Hi Chris >> >> I'm checking storyboard for node-feature-discovery [1] to design test scenarios >> about it, for me is not clear yet how to enable that feature on my deployment, >> are you able to explain more on how to do it? I also see that documentation >> was provided to docs team, are you able to point me out to that >> documentation so I can have more details of this feature implementation. >> >> 1. https://storyboard.openstack.org/#!/story/2005193 >> >> Regards, >> José >> > From scott.little at windriver.com Mon Jun 3 20:12:58 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 3 Jun 2019 16:12:58 -0400 Subject: [Starlingx-discuss] CENGN changes In-Reply-To: <704b03d6-d318-80b6-ee68-3bd45a5038de@windriver.com> References: <272816eb-87c7-d7fa-a3bf-6ec3801442a6@windriver.com> <704b03d6-d318-80b6-ee68-3bd45a5038de@windriver.com> Message-ID: <87fc5603-1170-2b80-e181-721ee25f92df@windriver.com> We've taken the opportunity presented by the latest outage to cut http://mirror.starlingx.cengn.ca over to a new kubernetes based server. It uses nginx rather than lighttpd, so you might see a few small formatting changes. If there are any issues, please report them to me. Scott On 2019-06-03 2:31 p.m., Scott Little wrote: > CENGN is restored. > > Scott > > > > On 2019-06-03 2:03 p.m., Scott Little wrote: >> We are now experiencing a second outage at CENGN.  This one started >> at about 1:50 EST.  I have opened another ticket. 
>> >> Scott >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Mon Jun 3 21:24:08 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 3 Jun 2019 21:24:08 +0000 Subject: [Starlingx-discuss] Distro.openstack agenda for June 4th Message-ID: <9A85D2917C58154C960D95352B22818BD074E92B@fmsmsx123.amr.corp.intel.com> Nova Placement changes - Zhipeng Nova rebase - NUMA backport patch Zuul failures - Ya Wang Bugs - review assigned, triage new bugs -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Mon Jun 3 22:28:49 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 3 Jun 2019 22:28:49 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190603 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-03 (link) Status: RED ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 49 TCs | BLOCKED Sanity-Platform 11 TCs | BLOCKED ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs | BLOCKED Sanity-Platform 09 TCs | BLOCKED ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs | BLOCKED Sanity-Platform 09 TCs | BLOCKED ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs | BLOCKED Sanity-Platform 05 TCs | BLOCKED ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 49 TCs | BLOCKED Sanity Platform 07 TCs | BLOCKED ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 51 TCs | BLOCKED Sanity Platform 05 TCs | BLOCKED ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 52 TCs | BLOCKED Sanity Platform 05 TCs | BLOCKED ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 52 TCs | BLOCKED Sanity Platform 05 TCs | BLOCKED ------------------------------ TOTAL: 61 TCs Master controller reboots after ansible play https://bugs.launchpad.net/starlingx/+bug/1831485 registry.local hostname not stored in docker proxy configuration https://bugs.launchpad.net/starlingx/+bug/1831507 Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Jun 3 22:46:40 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 3 Jun 2019 22:46:40 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 06/04/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CE05F1D@FMSMSX114.amr.corp.intel.com> Meeting agenda for 06/04/2019 1. Sanity status - Cristopher 2. 
Feature testing progress - All Containers - OpenStack patch elimination - CentOS 7.6 - QAT Containerized OVS - OVS-DPDK firewall - OVS pmon - Ceph upgrade - 3. Opens - All

From sgw at linux.intel.com Mon Jun 3 22:56:44 2019 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 3 Jun 2019 15:56:44 -0700 Subject: Re: [Starlingx-discuss] Flock Versioning for packaging In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD0740DE9@fmsmsx123.amr.corp.intel.com> Message-ID: <1864478b-e052-6872-8537-9319d1c15253@linux.intel.com>

On 5/16/19 10:33 AM, Dean Troyer wrote: > On Wed, May 15, 2019 at 3:19 PM Bailey, Henry Albert (Al) > wrote: >> If we update the build tool (and remove those variables from the spec files), then all python components in a particular repo will have the same version. >> We have some repos where there are multiple python components in the same repo. > > As you note PBR and other OpenStack tooling has the assumption that > everything in a git repo is related and is a single "thing". This > could be changed, thus far it really has been easier to break out > common components. We have to work around this in other areas too, > such as maintaining multiple tox job definitions rather than using a > single top-level tox.ini. >

So, there have been a couple of replies to this thread; I would like to bring it back to the top of the queue as something to address in the STX-3.0 release.

It seems we will need to add tagging back into our process for proper PBR support to ensure the flock gets the correct version. If items need independent versioning then we may have to manually handle the versioning or possibly split them out as their own repos depending on the requirements.

> I think we need to break out more parts from the existing repos but > within the same sub-project teams. I would start with either major > pieces (inventory) or the small dependencies (tsconfig, > fm-common/fm-core) and clients (cgts-client). I have done an > experiment with cgts-client that took a couple of hours and is even > mostly automated and maintains the git history. >

As we work on the openSUSE specfiles, it seems part of the way tsconfig works is to copy one directory into the Source RPM in CentOS so everything is in the correct location (tsconfig/scripts is copied into tsconfig/tsconfig). Using the generic tarball [0] from stx-update, the scripts directory is not in the same place relative to the tsconfig setup.py.

[0] https://opendev.org/starlingx/update/archive/master.tar.gz

I hope this makes sense, I got into the weeds a little.

Sau! > dt >
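To illustrate the tagging dependency Saul describes, here is a minimal sketch of how PBR derives a package version from git history. It is hedged: "tsconfig" and the tag value are placeholders, not the actual flock tags or release numbers.

    # Illustrative only: how a PBR-based package picks up its version from tags.
    cd tsconfig                        # placeholder: any PBR-based flock package
    git tag 2.0.0                      # release tag on the current commit
    python setup.py --version          # -> 2.0.0 (PBR reads the tag)
    git commit --allow-empty -m "post-release change"
    python setup.py --version          # -> 2.0.1.devN (a development version)

Without a tag there is no release anchor, and every package built from the same repo derives the same version, which is Al's point about independent versioning inside one repo being awkward.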
From build.starlingx at gmail.com Tue Jun 4 01:58:01 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 3 Jun 2019 21:58:01 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 289 - Failure! Message-ID: <1830454047.2.1559613483869.JavaMail.javamailuser@localhost>

Project: STX_build_pre_installer Build #: 289 Status: Failure Timestamp: 20190604T000010Z

Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190603T233000Z/logs -------------------------------------------------------------------------------- Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190603T233000Z DOCKER_BUILD_ID: jenkins-master-20190603T233000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190603T233000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190603T233000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master

From build.starlingx at gmail.com Tue Jun 4 01:58:06 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 3 Jun 2019 21:58:06 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 127 - Failure! Message-ID: <1643442174.5.1559613487534.JavaMail.javamailuser@localhost>

Project: STX_build_master_master Build #: 127 Status: Failure Timestamp: 20190603T233000Z

Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190603T233000Z/logs -------------------------------------------------------------------------------- Parameters

BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false

From yong.hu at intel.com Tue Jun 4 03:46:30 2019 From: yong.hu at intel.com (Hu, Yong) Date: Tue, 4 Jun 2019 03:46:30 +0000 Subject: [Starlingx-discuss] Informations about StarlingX Message-ID:

Hi Adrien Macor,

To help you quickly catch up about StarlingX, you can get more info from https://starlingx.io

Once you have some basic knowledge about StarlingX and you'd like to have some hands-on, you can reach out to this community.

For your questions below, pls see my inline comments.

Regards, Yong

On 04/06/2019, 12:19 AM, "Adrien Macor" > wrote:

Hi,

I'm currently doing my bachelor thesis in the School of Engineering and Architecture of Fribourg. The name of this project is "Edge cloud orchestration and monitoring" and I'm now interresting on StarlingX. I have a few question for you:

1. I read this documentation: https://docs.starlingx.io/deployment_guides/current/duplex.html (the page notes that the containerized AIO-DX instructions are under development and points to the All in One Duplex Configuration wiki page for approved instructions), but something is still not clear for me: let's suppose I have three physical servers; two on the same area, and the last one elsewhere (not on the same network). May I use this infrastructure with StarlingX? If I understand correctly, as long as I have the connectivity between all my servers it's possible.

2. StarlingX enables to deploy VNF over the edge, but what's his differences with something like Vmware, another virtualization tool?

3. Last one: is it better to deploy one VNF over all my edge sites, or deploy multiple VNFs over all my edge sites, or split one VNF over all my edge... and so on. Or it depends on what the VNF is?

Thanks for the reply

Adriano Macor -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From haochuan.z.chen at intel.com Tue Jun 4 05:27:48 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 4 Jun 2019 05:27:48 +0000 Subject: [Starlingx-discuss] fix solution proposal for LP1826047 Message-ID: <56829C2A36C2E542B0CCB9854828E4D856223DBA@CDSMSX102.ccr.corp.intel.com>

Hi

I am checking this Launchpad issue. The failure happens when sysinv-api exits unexpectedly during a system application-apply (while retrieving docker images). When the service manager re-launches sysinv-api, the application status is still "applying", so the application cannot be removed.

There are two solutions.

Solution 1: when sysinv-api or sysinv-conductor launches, in the __init__ function, check the application status in the database; if the status is "uploading", "applying" or "removing", change it to "upload-failed", "apply-failed" or "removed-failed".

Solution 2: add a perform-abort action for upload and apply. Use a flag to quickly exit an upload or apply action, and set the database status to "upload-failed" or "apply-failed".

https://bugs.launchpad.net/starlingx/+bug/1826047

Wait for brain-storming.

Thanks!

Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL:

From yong.hu at intel.com Tue Jun 4 05:43:36 2019 From: yong.hu at intel.com (Hu, Yong) Date: Tue, 4 Jun 2019 05:43:36 +0000 Subject: [Starlingx-discuss] about openstack-helm version Message-ID: <223740C9-1C06-401E-90AE-B4F5A8AA725D@intel.com>

Currently in "~/stx-upstream/openstack/openstack-helm/centos/build_srpm.data", we are using openstack-helm at commit ID "6c71637222f47d85681038994f02feac92f75bd2".

However, after "6c716372", there were some fixes for issues we reported, such as, https://bugs.launchpad.net/starlingx/+bug/1829793

so, here come questions: 1. what is the tactic to sync up or update this kind of upstream project? 2. for stx.2.0, have we decided that we lock down the versions of openstack upstream projects?

Regards, Yong

From chenjie.xu at intel.com Tue Jun 4 12:12:49 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Tue, 4 Jun 2019 12:12:49 +0000 Subject: [Starlingx-discuss] Docker images missing in local registry Message-ID:

Hi team,

Is there anybody who can execute the following command to list the docker images and attach the resulting "images.txt"? Then I can compare the docker images with mine and download the missing docker images to our local registry.

sudo docker images > images.txt
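For anyone else stuck the same way, below is a minimal sketch of pushing the missing images into a private registry once they have been identified. It is hedged: missing.txt is a hypothetical list of image:tag names, and the registry address is simply the one that appears later in this thread, not a fixed part of any STX tooling.

    # Illustrative helper: pull missing images and push them to a local registry.
    REGISTRY=10.239.12.37:5000
    while read -r image; do                 # e.g. "quay.io/calico/cni:v3.6.2"
      sudo docker pull "$image"
      local_tag="$REGISTRY/${image#*/}"     # strip the source registry prefix
      sudo docker tag "$image" "$local_tag"
      sudo docker push "$local_tag"
    done < missing.txt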
I failed to set up StarlingX AIO Simplex with the 0523 and 0527 ISO images because platform-integ-apps failed to be applied. The reason why platform-integ-apps can't be applied is that some docker images can't be downloaded, since I'm using a local registry which doesn't have the missing docker images.

Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL:

From austin.sun at intel.com Tue Jun 4 12:21:00 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 4 Jun 2019 12:21:00 +0000 Subject: Re: [Starlingx-discuss] Docker images missing in local registry In-Reply-To: References: Message-ID:

ChenJie: Please check if your docker images include calico/cni: v3.6.2 or not.

Thanks. BR Austin Sun.

From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Tuesday, June 4, 2019 8:13 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docker images missing in local registry

Hi team, Is there anybody who can execute the following commands to list the docker images and attach the "image.txt"? Then I can compare the docker images with mine and download the missing docker images to our local registry. sudo docker images > images.txt I failed to set up StarlingX AIO Simplex with 0523 and 0527 ISO image because platform-integ-apps failed to be applied. The reason why platform-integ-apps can't be applied is that some docker images can't be downloaded since I'm using a local registry which doesn't have the missing docker images. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL:

From zhipengs.liu at intel.com Tue Jun 4 12:42:01 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 4 Jun 2019 12:42:01 +0000 Subject: Re: [Starlingx-discuss] Distro.openstack agenda for June 4th In-Reply-To: <9A85D2917C58154C960D95352B22818BD074E92B@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD074E92B@fmsmsx123.amr.corp.intel.com> Message-ID: <93814834B4855241994F290E959305C7530898BA@SHSMSX104.ccr.corp.intel.com>

Hi Bruce and all,

Below is an update on the openstack-placement containerization task. The 4 patches below have been submitted for review; I got some comments and updated the patches. So far, I have started containerization debugging and have found and fixed some issues. The current focus is database setup for the placement service.

1) Add stx-placement docker image directives files https://review.opendev.org/#/c/661679/ 2) WIP: add placement chart ( submitted to openstack-helm project) https://review.opendev.org/#/c/662229/ 3) Add placement chart patch to openstack-helm https://review.opendev.org/#/c/662371/ 4) Add placement chart to armada system https://review.opendev.org/#/c/662614/

Your comments are appreciated!

Thanks! Zhipeng

From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: 2019年6月4日 5:24 To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] Distro.openstack agenda for June 4th

Nova Placement changes - Zhipeng Nova rebase - NUMA backport patch Zuul failures - Ya Wang Bugs - review assigned, triage new bugs -------------- next part -------------- An HTML attachment was scrubbed... URL:

From Al.Bailey at windriver.com Tue Jun 4 12:52:18 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Tue, 4 Jun 2019 12:52:18 +0000 Subject: Re: [Starlingx-discuss] fix solution proposal for LP1826047 In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D856223DBA@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D856223DBA@CDSMSX102.ccr.corp.intel.com> Message-ID:

Solution 1 sounds like how Openstack Heat handles an abnormal termination of the heat-engine (how it needs to properly clean up its stack locks). Solution 2 is similar to what I have seen for nova and cinder CLI options.

Personally, I like option 1. Al

From: Chen, Haochuan Z [mailto:haochuan.z.chen at intel.com] Sent: Tuesday, June 04, 2019 1:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] fix solution proposal for LP1826047

Hi I am check this launch pad issue. This issue is failed, when system application apply(retrieving docker image), sysinv-api exit unexpected. When service manager re-launch sysinv-api again, application status is in applying status. So application could not be removed. There are two solutions. Solution 1, when sysinv-api or sysinv-conductor launch, in __init__ function, check application status in database, if status is "uploading", "applying" or "removing", change the status to "upload-failed", "apply-failed" or "removed-failed" Solution 2, add perform-abort action for upload or apply. Use a flag to quickly exit and upload and apply action, and set database to "upload-failed" or "apply-failed". https://bugs.launchpad.net/starlingx/+bug/1826047 Wait for brain-storming. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL:
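Until one of the proposed solutions lands, a stuck status can only be cleared by hand. A heavily hedged sketch of that workaround follows: the database, table and column names below are assumptions for illustration only and should be verified against the real sysinv schema before touching anything.

    # ASSUMPTION: sysinv keeps application state in its postgres database in a
    # table like "kube_app" with a "status" column. Names are illustrative only.
    sudo -u postgres psql -d sysinv -c "SELECT name, status FROM kube_app;"
    sudo -u postgres psql -d sysinv -c "UPDATE kube_app SET status='apply-failed' WHERE status='applying';"
    # After the reset, "system application-remove <app>" should be accepted again.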
From pvmpublic at gmail.com Tue Jun 4 13:11:12 2019 From: pvmpublic at gmail.com (Pratik M.) Date: Tue, 4 Jun 2019 18:41:12 +0530 Subject: [Starlingx-discuss] Hands-on workshop recording link? Message-ID:

Hi, Was the recent StarlingX hands-on workshop recorded? Would appreciate it if someone could send the link. https://www.openstack.org/summit/denver-2019/summit-schedule/events/23630/starlingx-hands-on-workshop

Does anyone have a recommendation for the latest ISO for a 2.0 All-in-One Duplex install? I looked at a couple of sanity reports but wasn't sure.

Also mirror.starlingx.cengn.ca seems to be unreachable.

Thanks in advance Pratik

From Al.Bailey at windriver.com Tue Jun 4 13:21:01 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Tue, 4 Jun 2019 13:21:01 +0000 Subject: Re: [Starlingx-discuss] about openstack-helm version In-Reply-To: <223740C9-1C06-401E-90AE-B4F5A8AA725D@intel.com> References: <223740C9-1C06-401E-90AE-B4F5A8AA725D@intel.com> Message-ID:

For question 1, I think typically the tactic for dealing with upstream fixes is - we source the solution upstream ( in this case, it looks like a solution already exists) - we apply that patch in our repo (example: https://opendev.org/starlingx/upstream/commit/c4bed237e9d60f1bd1cc68400e5548b561a298a0 ) - once we rebase to a newer version of that upstream component, we remove the local patches that already exist upstream. If all of our local patches are no longer needed, and we no longer need to build src rpms, we may switch to prebuilt rpms. - rebasing to a newer version of a component is typically driven by a story, since there can be complications and oftentimes these need to be synchronized

For question 2 the openstack upstream projects are locked down through the branch in the manifest (default.xml) Not all are locked down in manifest, but that’s mostly because I'm still cleaning them out (as things are becoming containerized they do not need to be checked out into the workspace) We also include upstream components through lst files in https://opendev.org/starlingx/tools/src/branch/master/centos-mirror-tools Sometimes we need to update these, depending on the problem we are trying to fix.

Al

-----Original Message----- From: Hu, Yong [mailto:yong.hu at intel.com] Sent: Tuesday, June 04, 2019 1:44 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] about openstack-helm version

Currently in "~/stx-upstream/openstack/openstack-helm/centos/build_srpm.data", we are using openstack-helm at commit ID "6c71637222f47d85681038994f02feac92f75bd2". However, after "6c716372", there were some fixes for issues we reported, such as, https://bugs.launchpad.net/starlingx/+bug/1829793 so, here come questions: 1. what is the tactic to sync up or update this kind of upstream project? 2. for stx.2.0, have we decided that we lock down the versions of openstack upstream projects?

Regards, Yong

_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
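As a sketch of the "apply that patch in our repo" step Al describes, for the openstack-helm case: this is hedged, since the files/ destination below is an assumption about the repo layout, while the build_srpm.data path comes from Yong's mail.

    # Illustrative flow for carrying an upstream fix until the next rebase.
    git clone https://github.com/openstack/openstack-helm && cd openstack-helm
    git format-patch -1 <upstream-fix-sha> -o ../stx-upstream/openstack/openstack-helm/centos/files/
    # Then reference the new .patch from the package's build metadata / spec so
    # it is applied on top of the commit pinned in centos/build_srpm.data, and
    # remove it again once a rebase picks up the fix upstream.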
From serverascode at gmail.com Tue Jun 4 13:23:41 2019 From: serverascode at gmail.com (Curtis) Date: Tue, 4 Jun 2019 09:23:41 -0400 Subject: Re: [Starlingx-discuss] Hands-on workshop recording link? In-Reply-To: References: Message-ID:

On Tue, Jun 4, 2019 at 9:13 AM Pratik M. wrote: > Hi, > Was the recent StarlingX hands-on workshop recorded? Would appreciate > if someone can send the link. > > https://www.openstack.org/summit/denver-2019/summit-schedule/events/23630/starlingx-hands-on-workshop > >

Hi,

It was not recorded, I don't think the summit workshops usually are. We should have maybe recorded a demo version, but we didn't. Something to think about for future workshops.

Sorry, Curtis

> Does anyone have a recommendation for latest iso for a 2.0 All-in-One > duplex install? I looked at a couple of sanity reports but wasn't > sure. > > Also mirror.starlingx.cengn.ca seems to be un-reachable. > > Thanks in advance > Pratik > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL:

From Tee.Ngo at windriver.com Tue Jun 4 13:26:30 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Tue, 4 Jun 2019 13:26:30 +0000 Subject: Re: [Starlingx-discuss] fix solution proposal for LP1826047 In-Reply-To: References: <56829C2A36C2E542B0CCB9854828E4D856223DBA@CDSMSX102.ccr.corp.intel.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE44CA2985E@ALA-MBD.corp.ad.wrs.com>

It should be a combination of the 2 solutions, as solution #2 provides the option to bail while sysinv processes are running: a) the application thread stalled (e.g. during the cleanup of (test) mariadb pods/stx-openstack namespace, an issue seen in the past where the system became sluggish), or b) the client simply wants to abort the operation.

Solution #2 is more involved than just updating the app state in the database.

Tee

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: June-04-19 8:52 AM To: Chen, Haochuan Z; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] fix solution proposal for LP1826047

Solution 1 sounds like how Openstack Heat handles an abnormal termination of the heat-engine (how it needs to properly clean up its stack locks) Solution 2 is similar to what I have seen for nova and cinder CLI options. Personally, I like option 1. Al

From: Chen, Haochuan Z [mailto:haochuan.z.chen at intel.com] Sent: Tuesday, June 04, 2019 1:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] fix solution proposal for LP1826047

Hi I am check this launch pad issue. This issue is failed, when system application apply(retrieving docker image), sysinv-api exit unexpected. When service manager re-launch sysinv-api again, application status is in applying status. So application could not be removed. There are two solutions. Solution 1, when sysinv-api or sysinv-conductor launch, in __init__ function, check application status in database, if status is "uploading", "applying" or "removing", change the status to "upload-failed", "apply-failed" or "removed-failed" Solution 2, add perform-abort action for upload or apply.
Use a flag to quickly exit and upload and apply action, and set database to "upload-failed" or "apply-failed". https://bugs.launchpad.net/starlingx/+bug/1826047 Wait for brain-storming. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Jun 4 15:27:46 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 4 Jun 2019 15:27:46 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting June 4th 2019 Message-ID: <9A85D2917C58154C960D95352B22818BD074F544@fmsmsx123.amr.corp.intel.com> 6/4 meeting * Nova Placement changes - Zhipeng * Story: https://storyboard.openstack.org/#!/story/2005750 ? 1) Add stx-placement docker image directives files ? https://review.opendev.org/#/c/661679/ ? 2) WIP: add placement chart ( submitted to openstack-helm project) ? https://review.opendev.org/#/c/662229/ ? 3) Add placement chart patch to openstack-helm ? https://review.opendev.org/#/c/662371/ ? 4) Add placement chart to armada system ? https://review.opendev.org/#/c/662614/ * Nova rebase - NUMA backport patch Zuul failures - Ya Wang * PR posted to github. https://github.com/starlingx-staging/stx-nova/pull/24 Dean can run the tests against it. Test jobs started in this review: https://review.opendev.org/#/c/656065/. Late breaking news - the tests passed. The PR can merge. Please post the fix(es) to Nova. * Bugs * https://bugs.launchpad.net/starlingx/+bug/1822366 - is there someone who has the right hardware for this? Intel 82599 (Niantic) 10 G * Shuquan is trying to take over the vCPU model spec, but needs help in re-proposing the spec. Bruce to raise with Eric and Alex. * Late breaking news - the patch for "Fix shell upload-to-image with no volume type" is being backported to stable/stein: https://review.opendev.org/#/c/662782/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Tue Jun 4 15:34:26 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 4 Jun 2019 10:34:26 -0500 Subject: [Starlingx-discuss] Distro.openstack meeting June 4th 2019 In-Reply-To: <9A85D2917C58154C960D95352B22818BD074F544@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD074F544@fmsmsx123.amr.corp.intel.com> Message-ID: On Tue, Jun 4, 2019 at 10:29 AM Jones, Bruce E wrote: > PR posted to github. https://github.com/starlingx-staging/stx-nova/pull/24 Dean can run the tests against it. Test jobs started in this review: https://review.opendev.org/#/c/656065/. Late breaking news - the tests passed. The PR can merge. Please post the fix(es) to Nova. I will merge this this afternoon[0] if there are no objections or other specific timing requirements. It would be nice to have some additional LGTM comments on the PR just for, you know, posterity. dt [0] I'll be AFK for a few hours beginning soon...if someone wants to beat me to it, go ahead... -- Dean Troyer dtroyer at gmail.com From serverascode at gmail.com Tue Jun 4 16:32:20 2019 From: serverascode at gmail.com (Curtis) Date: Tue, 4 Jun 2019 12:32:20 -0400 Subject: [Starlingx-discuss] [packet-sig] Can't make today's meeting Message-ID: Hi All, Unfortunately I can't make today's meeting. Please feel free to meet and discuss and I will read the notes. Perhaps we can change this meeting to run once every couple of weeks or at a non-weekly cadence of some kind. Thanks, and apologies, Curtis -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From Bill.Zvonar at windriver.com Tue Jun 4 17:06:31 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 4 Jun 2019 17:06:31 +0000 Subject: Re: [Starlingx-discuss] [packet-sig] Can't make today's meeting In-Reply-To: References: Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A4933A@ALA-MBD.corp.ad.wrs.com>

Hi Curtis, +1 to making this a bi-weekly thing – starting next week?

From: Curtis Sent: Tuesday, June 4, 2019 12:32 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [packet-sig] Can't make today's meeting

Hi All,

Unfortunately I can't make today's meeting. Please feel free to meet and discuss and I will read the notes.

Perhaps we can change this meeting to run once every couple of weeks or at a non-weekly cadence of some kind.

Thanks, and apologies,

Curtis -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jose.perez.carranza at intel.com Tue Jun 4 17:13:47 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Tue, 4 Jun 2019 17:13:47 +0000 Subject: Re: [Starlingx-discuss] [Containers] node-feature-discovery In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A97C6E4@fmsmsx101.amr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2A97CE1E@fmsmsx101.amr.corp.intel.com>, <1cc6c7d5-0e61-ba8f-0a00-f0b0d80e1e68@windriver.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A97E25E@fmsmsx101.amr.corp.intel.com>

Hi Chris

Thanks for your info. I was able to install node-feature-discovery, following the requirements described on storyboard [1]. I see that I should be able to deploy a "daemonset" for master and worker nodes. Can I use the template defined on github [2]? Basically I'm following the instructions provided there and running the command below:

`sed -E s',^(\s*)image:.+$,\1image: quay.io/kubernetes_incubator/node-feature-discovery:v0.3.0,' nfd-master.yaml.template > nfd-master.yaml && kubectl create -f nfd-master.yaml`

But I'm getting the error below:

`$ kubectl create -f nfd-master.yaml Unable to connect to the server: net/http: TLS handshake timeout`

and now I cannot execute any other kubectl command.

1. https://storyboard.openstack.org/#!/story/2005193 2. https://github.com/kubernetes-sigs/node-feature-discovery/

Thanks Jose
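A few generic first checks for a kubectl "TLS handshake timeout", for anyone hitting the same wall. This is a hedged sketch assuming a kubeadm-style control plane running on the controller, not StarlingX-specific guidance:

    # Is the apiserver container still running? (generic kubeadm-style check)
    sudo docker ps | grep kube-apiserver
    # Does the API answer locally? 6443 is the default apiserver port.
    curl -k https://localhost:6443/healthz
    # Memory pressure on the controller is a common cause of these timeouts.
    free -m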
namespace: default # label for the daemonset to find its pods app_label: node-feature-discovery # docker image to use for the pods image: quay.io/kubernetes_incubator/node-feature-discovery:v0.3.0 # interval (in secs) to scan the node features scan_interval: 60 # key/value pair to match against node labels to select which nodes # should run the node feature discovery. Defaults to all nodes. node_selector_key: node_selector_value: In the simple case where we want to run using all the default values, after initial install and configuration the helm chart can be installed by running "helm upgrade -i node-feature-discovery /opt/extracharts/node-feature-discovery-0.3.0.tgz". This should result in the creation of one pod per node which runs once per minute to update the node features. Thanks, Chris On 6/3/2019 9:11 AM, Perez Carranza, Jose wrote: > Hi Chris > > Just to follow up on below question, do you have any info to share about node-feature-discovery feature? > > Regards, > José > > >> -----Original Message----- >> From: Perez Carranza, Jose >> Sent: Wednesday, May 29, 2019 11:34 AM >> To: Chris Friesen ; starlingx- >> discuss at lists.starlingx.io >> Cc: Miller, Frank >> Subject: [Containers] node-feature-discovery >> >> Hi Chris >> >> I'm checking storyboard for node-feature-discovery [1] to design test scenarios >> about it, for me is not clear yet how to enable that feature on my deployment, >> are you able to explain more on how to do it? I also see that documentation >> was provided to docs team, are you able to point me out to that >> documentation so I can have more details of this feature implementation. >> >> 1. https://storyboard.openstack.org/#!/story/2005193 >> >> Regards, >> José >> > From scott.little at windriver.com Tue Jun 4 19:28:36 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 4 Jun 2019 15:28:36 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 127 - Failure! In-Reply-To: <1643442174.5.1559613487534.JavaMail.javamailuser@localhost> References: <1643442174.5.1559613487534.JavaMail.javamailuser@localhost> Message-ID: <71cf5a10-ec74-b378-4083-847f03945e04@windriver.com> Unexplained failure of losetup to allocate a loop back device.  Nothing useful in the logs.  Not reproducible this afternoon.  Rebuild passed. Scott On 2019-06-03 9:58 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_master > Build #: 127 > Status: Failure > Timestamp: 20190603T233000Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190603T233000Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Tue Jun 4 19:47:18 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 4 Jun 2019 14:47:18 -0500 Subject: [Starlingx-discuss] Distro.openstack meeting June 4th 2019 In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD074F544@fmsmsx123.amr.corp.intel.com> Message-ID: On Tue, Jun 4, 2019 at 10:34 AM Dean Troyer wrote: > I will merge this this afternoon[0] if there are no objections or > other specific timing requirements. 
It would be nice to have some > additional LGTM comments on the PR just for, you know, posterity. This PR has been merged into stx/stein.1 branch of stx-nova. dt -- Dean Troyer dtroyer at gmail.com From maria.g.perez.ibarra at intel.com Tue Jun 4 21:48:11 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 4 Jun 2019 21:48:11 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190602 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-02 (link) Status: YELLOW ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 2 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 2 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs Cannot get prompt after login in ssh or serial https://bugs.launchpad.net/starlingx/+bug/1829941 Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Tue Jun 4 23:08:09 2019 From: serverascode at gmail.com (Curtis) Date: Tue, 4 Jun 2019 19:08:09 -0400 Subject: [Starlingx-discuss] [packet-sig] Can't make today's meeting In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC0A4933A@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC0A4933A@ALA-MBD.corp.ad.wrs.com> Message-ID: On Tue, Jun 4, 2019 at 1:07 PM Zvonar, Bill wrote: > Hi Curtis, +1 to making this a bi-weekly thing – starting next week? > Sure, starting next week! Sounds great. Thanks, Curtis > > > *From:* Curtis > *Sent:* Tuesday, June 4, 2019 12:32 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] [packet-sig] Can't make today's meeting > > > > Hi All, > > > > Unfortunately I can't make today's meeting. Please feel free to meet and > discuss and I will read the notes. > > > > Perhaps we can change this meeting to run once every couple of weeks or at > a non-weekly cadence of some kind. > > > > Thanks, and apologies, > > Curtis > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Wed Jun 5 00:34:03 2019 From: yong.hu at intel.com (Hu, Yong) Date: Wed, 5 Jun 2019 00:34:03 +0000 Subject: [Starlingx-discuss] [Containers] node-feature-discovery Message-ID: <10FD9C3E-7A2E-4798-861E-7F362CE60C85@intel.com> You might have a try to pull this docker image manually to assure your network reachable to quay.io $ sudo docker pull quay.io/kubernetes_incubator/node-feature-discovery:v0.3.0 On 05/06/2019, 1:15 AM, "Perez Carranza, Jose" wrote: Hi Cris Thanks for your info. I was able to install node-feature-discovery, following the requirements described on storyboard [1], I'm checking that I should be able to deploy a "demonset" for master and worker nodes, Can I use the template defined on github [2]? 
Basically I'm following the instructions provided there and running the command below:

`sed -E s',^(\s*)image:.+$,\1image: quay.io/kubernetes_incubator/node-feature-discovery:v0.3.0,' nfd-master.yaml.template > nfd-master.yaml && kubectl create -f nfd-master.yaml`

But I'm getting the error below:

`$ kubectl create -f nfd-master.yaml
Unable to connect to the server: net/http: TLS handshake timeout`

and now I cannot execute any other kubectl command.

1. https://storyboard.openstack.org/#!/story/2005193
2. https://github.com/kubernetes-sigs/node-feature-discovery/

Thanks
Jose

________________________________________
From: Chris Friesen [chris.friesen at windriver.com]
Sent: Monday, June 03, 2019 2:35 PM
To: Perez Carranza, Jose; 'starlingx-discuss at lists.starlingx.io'
Subject: Re: [Containers] node-feature-discovery

Hi José,

The docs are still in progress. Here's what I sent to the docs folks:

The intent of this story was to deploy the functionality implemented by https://github.com/kubernetes-sigs/node-feature-discovery/tree/v0.3.0 as an optional component in StarlingX.

Basically it detects hardware features available on each node in a Kubernetes cluster, and advertises those features using kubernetes node labels. You can then use the regular Kubernetes label-based functionality to specify the nodes with features that are of interest. (nodeSelector, "kubectl get node -l ", etc.)

As of the beginning of May StarlingX should have a new file /opt/extracharts/node-feature-discovery-0.3.0.tgz which is a helm chart that provides the above functionality. Our version also allows for customization of various parameters via helm chart overrides. The configurable options and their defaults are as follows:

# namespace to use for chart resources. Must be specified.
namespace: default

# label for the daemonset to find its pods
app_label: node-feature-discovery

# docker image to use for the pods
image: quay.io/kubernetes_incubator/node-feature-discovery:v0.3.0

# interval (in secs) to scan the node features
scan_interval: 60

# key/value pair to match against node labels to select which nodes
# should run the node feature discovery. Defaults to all nodes.
node_selector_key:
node_selector_value:

In the simple case where we want to run using all the default values, after initial install and configuration the helm chart can be installed by running "helm upgrade -i node-feature-discovery /opt/extracharts/node-feature-discovery-0.3.0.tgz". This should result in the creation of one pod per node which runs once per minute to update the node features.

Thanks,
Chris

On 6/3/2019 9:11 AM, Perez Carranza, Jose wrote:
> Hi Chris
>
> Just to follow up on the question below, do you have any info to share about the node-feature-discovery feature?
>
> Regards,
> José
>
>
>> -----Original Message-----
>> From: Perez Carranza, Jose
>> Sent: Wednesday, May 29, 2019 11:34 AM
>> To: Chris Friesen; starlingx-discuss at lists.starlingx.io
>> Cc: Miller, Frank
>> Subject: [Containers] node-feature-discovery
>>
>> Hi Chris
>>
>> I'm checking the storyboard for node-feature-discovery [1] to design test scenarios
>> about it. It is not clear to me yet how to enable that feature on my deployment;
>> are you able to explain more on how to do it? I also see that documentation
>> was provided to the docs team; are you able to point me to that
>> documentation so I can have more details of this feature's implementation?
>>
>> 1. https://storyboard.openstack.org/#!/story/2005193
>>
>> Regards,
>> José
>>
>

From chenjie.xu at intel.com  Wed Jun  5 01:35:45 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 5 Jun 2019 01:35:45 +0000
Subject: [Starlingx-discuss] Docker images missing in local registry
In-Reply-To: 
References: 
Message-ID: 

Hi Austin,
My calico/cni is v3.6.1, as follows:

10.239.12.37:5000/calico/node              v3.6.1   b4d7c4247c3a   2 months ago   73.2MB
10.239.12.37:5000/calico/cni               v3.6.1   c7d27197e298   2 months ago   84.3MB
10.239.12.37:5000/calico/kube-controllers  v3.6.1   0bd1f99c7034   2 months ago   50.9MB

Do you mean I need to download calico 3.6.2?

Best Regards,
Xu, Chenjie

From: Sun, Austin
Sent: Tuesday, June 4, 2019 8:21 PM
To: Xu, Chenjie; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Docker images missing in local registry

ChenJie:
Please check if your docker images include calico/cni: v3.6.2 or not.

Thanks.
BR
Austin Sun.

From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
Sent: Tuesday, June 4, 2019 8:13 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Docker images missing in local registry

Hi team,
Is there anybody who can execute the following commands to list the docker images and attach the "image.txt"?
Then I can compare the docker images with mine and download the missing docker images to our local registry.

sudo docker images > images.txt

I failed to set up StarlingX AIO Simplex with the 0523 and 0527 ISO images because platform-integ-apps failed to be applied. The reason why platform-integ-apps can't be applied is that some docker images can't be downloaded, since I'm using a local registry which doesn't have the missing docker images.

Best Regards,
Xu, Chenjie

From austin.sun at intel.com  Wed Jun  5 01:49:53 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Wed, 5 Jun 2019 01:49:53 +0000
Subject: [Starlingx-discuss] Docker images missing in local registry
In-Reply-To: 
References: 
Message-ID: 

Hi Chenjie:
Yes. From this change,
https://review.opendev.org/#/c/661849/4/puppet-manifests/src/modules/platform/templates/calico.yaml.erb
Calico was upgraded to v3.6.2.

BR
Austin Sun.

From: Xu, Chenjie
Sent: Wednesday, June 5, 2019 9:36 AM
To: Sun, Austin; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Docker images missing in local registry

Hi Austin,
My calico/cni is v3.6.1, as follows:

10.239.12.37:5000/calico/node              v3.6.1   b4d7c4247c3a   2 months ago   73.2MB
10.239.12.37:5000/calico/cni               v3.6.1   c7d27197e298   2 months ago   84.3MB
10.239.12.37:5000/calico/kube-controllers  v3.6.1   0bd1f99c7034   2 months ago   50.9MB

Do you mean I need to download calico 3.6.2?

Best Regards,
Xu, Chenjie

From: Sun, Austin
Sent: Tuesday, June 4, 2019 8:21 PM
To: Xu, Chenjie; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Docker images missing in local registry

ChenJie:
Please check if your docker images include calico/cni: v3.6.2 or not.

Thanks.
BR
Austin Sun.

From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
Sent: Tuesday, June 4, 2019 8:13 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Docker images missing in local registry

Hi team,
Is there anybody who can execute the following commands to list the docker images and attach the "image.txt"? Then I can compare the docker images with mine and download the missing docker images to our local registry.

sudo docker images > images.txt

I failed to set up StarlingX AIO Simplex with the 0523 and 0527 ISO images because platform-integ-apps failed to be applied. The reason why platform-integ-apps can't be applied is that some docker images can't be downloaded, since I'm using a local registry which doesn't have the missing docker images.

Best Regards,
Xu, Chenjie

From chenjie.xu at intel.com  Wed Jun  5 02:01:48 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 5 Jun 2019 02:01:48 +0000
Subject: [Starlingx-discuss] Docker images missing in local registry
In-Reply-To: 
References: 
Message-ID: 

Hi Austin,
Thank you so much! I will give it a try.

Best Regards,
Xu, Chenjie

From: Sun, Austin
Sent: Wednesday, June 5, 2019 9:50 AM
To: Xu, Chenjie; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Docker images missing in local registry

Hi Chenjie:
Yes. From this change,
https://review.opendev.org/#/c/661849/4/puppet-manifests/src/modules/platform/templates/calico.yaml.erb
Calico was upgraded to v3.6.2.

BR
Austin Sun.

[...]

From build.starlingx at gmail.com  Wed Jun  5 02:23:50 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 4 Jun 2019 22:23:50 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 291 - Failure!
Message-ID: <1904059208.10.1559701432171.JavaMail.javamailuser@localhost>

Project: STX_build_pre_installer
Build #: 291
Status: Failure
Timestamp: 20190605T021938Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190605T013000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190605T013000Z
DOCKER_BUILD_ID: jenkins-master-20190605T013000Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190605T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190605T013000Z/logs
MASTER_JOB_NAME: STX_build_master_master
MY_REPO_ROOT: /localdisk/designer/jenkins/master

From build.starlingx at gmail.com  Wed Jun  5 02:23:54 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 4 Jun 2019 22:23:54 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 129 - Failure!
Message-ID: <1114504760.13.1559701435715.JavaMail.javamailuser@localhost>

Project: STX_build_master_master
Build #: 129
Status: Failure
Timestamp: 20190605T013000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190605T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From cheng1.li at intel.com  Wed Jun  5 04:00:33 2019
From: cheng1.li at intel.com (Li, Cheng1)
Date: Wed, 5 Jun 2019 04:00:33 +0000
Subject: [Starlingx-discuss] Docker images missing in local registry
In-Reply-To: 
References: 
Message-ID: 

Hello,

As you know, the image tags have changed frequently in the last few months, which makes it hard to maintain the local repo. I don't know an easy way to get the latest image list. To my knowledge, the only way to get the image list is to install STX without using the local repo, but that takes much time to download images from docker.io, gcr.io, etc. And as you know, we have many versions of helm charts[1], which means I would need to install STX multiple times to collect the image lists of these helm charts.

So I wonder if it's possible to publish an image-list file together with our daily build. For example, we run

docker images > images-centos-dev-latest.txt

in the Sanity test and publish the images-centos-dev-latest.txt onto http://mirror.starlingx.cengn.ca.

[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190604T144018Z/outputs/helm-charts/

Thanks,
Cheng

From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
Sent: Wednesday, June 5, 2019 10:02 AM
To: Sun, Austin; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Docker images missing in local registry

Hi Austin,
Thank you so much! I will give it a try.

Best Regards,
Xu, Chenjie

From: Sun, Austin
Sent: Wednesday, June 5, 2019 9:50 AM
To: Xu, Chenjie; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Docker images missing in local registry

Hi Chenjie:
Yes. From this change,
https://review.opendev.org/#/c/661849/4/puppet-manifests/src/modules/platform/templates/calico.yaml.erb
Calico was upgraded to v3.6.2.

BR
Austin Sun.
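As a stopgap until such a list is published, one rough approach is to dump the pullable tags on a working system and mirror any missing images by hand. A sketch only, assuming the local registry at 10.239.12.37:5000 mentioned earlier in this thread:

$ sudo docker images --format '{{.Repository}}:{{.Tag}}' | sort -u > images-centos-dev-latest.txt
$ sudo docker pull quay.io/calico/cni:v3.6.2
$ sudo docker tag quay.io/calico/cni:v3.6.2 10.239.12.37:5000/calico/cni:v3.6.2
$ sudo docker push 10.239.12.37:5000/calico/cni:v3.6.2
# repeat the pull/tag/push sequence for each missing image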
From: Xu, Chenjie
Sent: Wednesday, June 5, 2019 9:36 AM
To: Sun, Austin; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Docker images missing in local registry

Hi Austin,
My calico/cni is v3.6.1, as follows:

10.239.12.37:5000/calico/node              v3.6.1   b4d7c4247c3a   2 months ago   73.2MB
10.239.12.37:5000/calico/cni               v3.6.1   c7d27197e298   2 months ago   84.3MB
10.239.12.37:5000/calico/kube-controllers  v3.6.1   0bd1f99c7034   2 months ago   50.9MB

Do you mean I need to download calico 3.6.2?

Best Regards,
Xu, Chenjie

From: Sun, Austin
Sent: Tuesday, June 4, 2019 8:21 PM
To: Xu, Chenjie; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Docker images missing in local registry

ChenJie:
Please check if your docker images include calico/cni: v3.6.2 or not.

Thanks.
BR
Austin Sun.

From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
Sent: Tuesday, June 4, 2019 8:13 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Docker images missing in local registry

Hi team,
Is there anybody who can execute the following commands to list the docker images and attach the "image.txt"? Then I can compare the docker images with mine and download the missing docker images to our local registry.

sudo docker images > images.txt

I failed to set up StarlingX AIO Simplex with the 0523 and 0527 ISO images because platform-integ-apps failed to be applied. The reason why platform-integ-apps can't be applied is that some docker images can't be downloaded, since I'm using a local registry which doesn't have the missing docker images.

Best Regards,
Xu, Chenjie

From cindy.xie at intel.com  Wed Jun  5 06:23:50 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 5 Jun 2019 06:23:50 +0000
Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/5
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F822C9@SHSMSX104.ccr.corp.intel.com>

Agenda:
1. Stx.2.0 storyboard review (Yong/Saul)
2. Ceph upgrade update
   a. Remaining test cases (Fernando)
   b. Remaining LP (Tingjie/Martin)
3. QAT upgrade update
   a. Test status (Ricardo/Haitao)
4. Kernel upgrade status to 3.10.0-957.12.2 (Haitao)

Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'starlingx-discuss at lists.starlingx.io'; Wold, Saul; 'Rowsell, Brent'
Cc: 'Carlos Cebrian'; 'Waines, Greg'; 'Zhi Zhi2 Chang'; 'Eslimi, Dariush'; Armstrong, Robert H; Jones, Bruce E; Gomez, Juan P; 'Seiler, Glenn'; Chen, Tingjie; Cobbley, David A; Badea, Daniel; Chen, Jacky; Hu, Wei W
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, June 5, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

* Cadence and time slot:
  o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
* Call Details:
  o Zoom link: https://zoom.us/j/342730236
  o Dialing in from phone:
  o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
  o Meeting ID: 342 730 236
  o International numbers available: https://zoom.us/u/ed95sU7aQ
* Meeting Agenda and Minutes:
  o https://etherpad.openstack.org/p/stx-distro-other
From Bill.Zvonar at windriver.com  Wed Jun  5 10:37:33 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 5 Jun 2019 10:37:33 +0000
Subject: [Starlingx-discuss] Community Call (June 5, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A4AA28@ALA-MBD.corp.ad.wrs.com>

Reminder of today's Community call - topics include...

- an overview of the Community activity dashboard
- MS-3 planning - what do we need to go deep on in tomorrow's release planning meeting
- bug count / resolution forecast
- TSC election recap

Please feel free to add topics to the agenda at [0].

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190605T1400

From ildiko.vancsa at gmail.com  Wed Jun  5 12:37:19 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 5 Jun 2019 14:37:19 +0200
Subject: [Starlingx-discuss] CFP reminders
Message-ID: 

Hi StarlingX Community,

I wanted to draw your attention to a few CFP deadlines that are approaching quickly:

* June 16 - ONS Europe - https://events.linuxfoundation.org/events/open-networking-summit-europe-2019/program/cfp/
* June 22 - OpenInfra Days Nordic - https://www.papercall.io/oidn-stockholm-2019
* July 2 - Open Infrastructure Summit Shanghai - https://www.openstack.org/summit/shanghai-2019&_eboga=204325035.1551339844

Please let me know if you need help with your session proposals.

Thanks,
Ildikó

From haochuan.z.chen at intel.com  Wed Jun  5 13:29:09 2019
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Wed, 5 Jun 2019 13:29:09 +0000
Subject: [Starlingx-discuss] fix solution proposal for LP1826047
In-Reply-To: <80ED4CE81E3D8F4099306648E95DAFE44CA2985E@ALA-MBD.corp.ad.wrs.com>
References: <56829C2A36C2E542B0CCB9854828E4D856223DBA@CDSMSX102.ccr.corp.intel.com>
 <80ED4CE81E3D8F4099306648E95DAFE44CA2985E@ALA-MBD.corp.ad.wrs.com>
Message-ID: <56829C2A36C2E542B0CCB9854828E4D856225127@CDSMSX102.ccr.corp.intel.com>

Hi Ngo & Bailey

Today I added these lines in stx-config/sysinv/sysinv/sysinv/sysinv/conductor/kube_app.py, function __init__:

        system = dbapi.isystem_get_one()
        if system.capabilities.get('kubernetes_enabled', False):
            LOG.error("kubernetes_enabled")
            for rpc_app in dbapi.kube_app_get_all():
                app = AppOperator.Application(
                    rpc_app,
                    rpc_app.get('name') in self._helm.get_helm_applications())
                op = ""
                if app.status == constants.APP_APPLY_IN_PROGRESS:
                    op = constants.APP_APPLY_OP
                elif app.status == constants.APP_UPLOAD_IN_PROGRESS:
                    op = constants.APP_UPLOAD_OP
                elif app.status == constants.APP_REMOVE_IN_PROGRESS:
                    op = constants.APP_REMOVE_OP
                if op:
                    self._abort_operation(app, op)
                    LOG.info("reset %s to %s failed" % (app.name, op))

But it will fail with:

2019-06-05 07:54:11.549 1987918 WARNING ceph_client [-] skip checking server certificate
2019-06-05 07:54:12.055 1987918 ERROR sysinv.conductor.kube_app [-] kubernetes_enabled
2019-06-05 07:54:12.062 1987918 ERROR sysinv.openstack.common.threadgroup [-] Cannot call save on orphaned KubeApp object
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup Traceback (most recent call last):
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/threadgroup.py", line 117, in wait
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     x.wait()
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/threadgroup.py", line 49, in wait
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     return self.thread.wait()
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, in wait
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     return self._exit_event.wait()
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     return hubs.get_hub().switch()
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     return self.greenlet.switch()
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     result = function(*args, **kwargs)
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/service.py", line 450, in run_service
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     service.start()
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 179, in start
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     self._start()
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 205, in _start
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     self._app = kube_app.AppOperator(self.dbapi)
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 157, in __init__
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     self._abort_operation(app, op)
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 198, in _abort_operation
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     progress)
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 189, in _update_app_status
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     app.update_status(new_status, new_progress)
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1415, in update_status
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     self._kube_app.save()
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup   File "/usr/lib64/python2.7/site-packages/sysinv/objects/base.py", line 128, in wrapper
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup     objtype=self.obj_name())
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup OrphanedObjectError: Cannot call save on orphaned KubeApp object

Any idea about it, or whether this solution is feasible or not? Now I am working on perform_apply_abort; tomorrow I will update you.

And for the sysinv-api exit, it shouldn't matter. As sysinv-conductor takes the task of applying the application and updating the status in the database, sysinv-api just polls the database for application-list.

BR!

Martin, Chen
SSP, Software Engineer
021-61164330

From: Ngo, Tee [mailto:Tee.Ngo at windriver.com]
Sent: Tuesday, June 4, 2019 9:27 PM
To: Bailey, Henry Albert (Al); Chen, Haochuan Z; starlingx-discuss at lists.starlingx.io
Subject: RE: fix solution proposal for LP1826047

It should be a combination of the 2 solutions, as solution #2 provides the option to bail while sysinv processes are running:
a) the application thread stalled (e.g. during the cleanup of the (test) mariadb pods/stx-openstack namespace issue seen in the past, when the system became sluggish)
b) the client simply wants to abort the operation

Solution #2 is more involved than just updating the app state in the database.

Tee

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: June-04-19 8:52 AM
To: Chen, Haochuan Z; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] fix solution proposal for LP1826047

Solution 1 sounds like how Openstack Heat handles an abnormal termination of the heat-engine (how it needs to properly clean up its stack locks).
Solution 2 is similar to what I have seen for nova and cinder CLI options.

Personally, I like option 1.
Al

From: Chen, Haochuan Z [mailto:haochuan.z.chen at intel.com]
Sent: Tuesday, June 04, 2019 1:28 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] fix solution proposal for LP1826047

Hi

I am checking this Launchpad issue. The issue occurs when, during system application-apply (while retrieving docker images), sysinv-api exits unexpectedly. When the service manager re-launches sysinv-api, the application status is still "applying", so the application cannot be removed.

There are two solutions.

Solution 1: when sysinv-api or sysinv-conductor launches, in the __init__ function, check the application status in the database; if the status is "uploading", "applying" or "removing", change the status to "upload-failed", "apply-failed" or "removed-failed".

Solution 2: add a perform-abort action for upload or apply. Use a flag to quickly exit an upload or apply action, and set the database to "upload-failed" or "apply-failed".

https://bugs.launchpad.net/starlingx/+bug/1826047

Wait for brain-storming. Thanks!

Martin, Chen
SSP, Software Engineer
021-61164330

From cindy.xie at intel.com  Wed Jun  5 13:32:23 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 5 Jun 2019 13:32:23 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/5
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F827CA@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for 6/5 meeting:

1. Stx.2.0 storyboard review (Yong/Saul)
   - [Enhancement] Upgrade version for influxdb (https://storyboard.openstack.org/#!/story/2003357): patch (https://review.opendev.org/#/c/661668/) was submitted. Testing is under way; a deployment issue was found and is under debug (root cause found), and it can be fixed. Still committed to get this SB in before MS3.
   - Most of the SBs are valid, so they were pushed out to stx.3.0. Erich also pushed out one of his SBs to 3.0.
2. Ceph upgrade update
   a. Remaining test cases (Fernando)
      - Test cases tracked: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1145711595
   b. Remaining LP (Ovidiu/Daniel/Tingjie/Martin) - https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
      - Tingjie: 3 issues already have patches, with 1 pending investigation.
      - Martin: 2 issues to be reproduced.
      - Daniel: working on SBs, will switch to LPs after that.
      - Ovidiu: 4 LPs assigned.
3. QAT upgrade update
   a. Test status (Ricardo/Haitao)
      - Test cases tracked: https://docs.google.com/spreadsheets/d/1TEjHB5JCYKFUwf2TM4D23OIA-9huYhGZrokZABJP2Lg/edit#gid=118299536
      - Shuicheng: found the deployment failure root cause, due to the file format changes in the latest ISO. The Nova container was updated with a modification of PCI passthrough. Email sent to Ricardo regarding the new instructions for PCI passthrough.
      - Other difference: embedded QAT devices were used in SH while Ricardo was using the PCI-e QAT card.
4. Kernel upgrade status to 3.10.0-957.12.2 (Haitao)
   - Patches uploaded: https://review.opendev.org/#/q/status:open+branch:master+topic:Bug/1830487
   - Test status: VE deployment has been done and Simplex on BM was successful.
5. Opens (all) - None

_____________________________________________
From: Xie, Cindy
Sent: Wednesday, June 5, 2019 2:24 PM
To: 'starlingx-discuss at lists.starlingx.io'; Wold, Saul; 'Rowsell, Brent'
Cc: Hu, Yong; Hernandez Gonzalez, Fernando; Chen, Tingjie; Chen, Haochuan Z; Perez, Ricardo O; Wang, Hai Tao
Subject: Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/5

[...]

From Bill.Zvonar at windriver.com  Wed Jun  5 13:57:31 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 5 Jun 2019 13:57:31 +0000
Subject: [Starlingx-discuss] Launchpad Tags count doesn't match query count...
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A4AC93@ALA-MBD.corp.ad.wrs.com>

Wondering if anyone knows why this is...

Looking at the query for a given tag, say stx.metal...

https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.metal

This gives 9 bugs, but the count for stx.metal in the Tags box (in the bottom right corner of the page) says 7.

A quick look at some tags seems to indicate that the Tags box isn't counting bugs with status INCOMPLETE.
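One way to cross-check is to count the tasks through the Launchpad API directly. An untested sketch, with the operation and parameter names taken from the Launchpad web-service docs:

$ curl -s "https://api.launchpad.net/1.0/starlingx?ws.op=searchTasks&tags=stx.metal" | grep -o '"total_size": [0-9]*'
$ curl -s "https://api.launchpad.net/1.0/starlingx?ws.op=searchTasks&tags=stx.metal&status=Incomplete" | grep -o '"total_size": [0-9]*'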
Is this the case? Is it intentional?

Thanks, Bill...

From marcela.a.rosales.jimenez at intel.com  Wed Jun  5 14:01:19 2019
From: marcela.a.rosales.jimenez at intel.com (Rosales Jimenez, Marcela A)
Date: Wed, 5 Jun 2019 14:01:19 +0000
Subject: [Starlingx-discuss] [MultiOS] Debian reviews and openSUSE packaging wiki
Message-ID: 

Hi team,

I sent the Debian control and rules files for the fault service to Gerrit; you can check them here:
https://review.opendev.org/#/q/status:open+project:starlingx/fault+branch:master+topic:multios

Also, I started working on the StarlingX packaging for openSUSE wiki:
https://wiki.openstack.org/wiki/StarlingX/MultiOS/OpenSUSE

Let me know of any feedback and comments!

Thanks.
Marcela

From Don.Penney at windriver.com  Wed Jun  5 15:08:36 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Wed, 5 Jun 2019 15:08:36 +0000
Subject: [Starlingx-discuss] Community activity dashboard
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC0A41529@ALA-MBD.corp.ad.wrs.com>
References: <469af93d-2d15-0043-1931-81a66be2278e@openstack.org>
 <586E8B730EA0DA4A9D6A80A10E486BC0A41529@ALA-MBD.corp.ad.wrs.com>
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA4AD84C@ALA-MBD.corp.ad.wrs.com>

To follow up on the discussion on the call this morning...

Here's one example of "Committer" being used rather than author. Jack Ding had helped with upstreaming a lot of our early commits. The biterg page shows him having 120 commits in config, but he's listed as the author of 6:

config$ git log --pretty=fuller |grep '^Commit:.*Jack Ding' | wc -l
118
config$ git log --pretty=fuller |grep '^Author:.*Jack Ding' | wc -l
6

-----Original Message-----
From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
Sent: Thursday, May 30, 2019 11:00 AM
To: Thierry Carrez; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Community activity dashboard

Hi Thierry, sorry for the late response - would you be interested in coming to next week's Community meeting [1][2] to give us an overview of the changes? I'm guessing that'll help generate more awareness and some comments from the community.

Bill...

[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190605T1400

-----Original Message-----
From: Thierry Carrez
Sent: Tuesday, May 14, 2019 10:46 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Community activity dashboard

Hi StarlingXers,

I just pushed some improvements to the default dashboard at:

https://starlingx.biterg.io/

Instead of showing git commit activity (which tends to introduce significant skew when repositories are forked or reused), it's now tracking development activity using merged Gerrit changes, which is much more accurate (and also a more comparable metric to what we use in OpenStack).

The "key metrics" numbers on the top-left can be used in conjunction with the date range selection (on the top right) to extract yearly activity numbers or per-release activity numbers.

Finally I added three panels at the bottom that show the monthly evolution in corporate diversity for proposed changes, code reviews and ML posts, which is I think a good way to track our progress there.
NB: The large number of "unknown" reviews on the middle bottom graph is a glitch that should disappear soon (Zuul review comments counting as unknown instead of being ignored).

Comments welcome!

-- 
Thierry Carrez (ttx)

From scott.little at windriver.com  Wed Jun  5 15:33:48 2019
From: scott.little at windriver.com (Scott Little)
Date: Wed, 5 Jun 2019 11:33:48 -0400
Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 129 - Failure!
In-Reply-To: <1114504760.13.1559701435715.JavaMail.javamailuser@localhost>
References: <1114504760.13.1559701435715.JavaMail.javamailuser@localhost>
Message-ID: <0b464033-b242-5293-9aa5-994cf79986e5@windriver.com>

Upstream mock was upgraded last night, and has switched to python3. This is causing breakage for new docker build environments.

https://bugs.launchpad.net/starlingx/+bug/1831768

I recommend folks try to keep reusing your existing docker build environments while I get this sorted out.

Scott

On 2019-06-04 10:23 p.m., build.starlingx at gmail.com wrote:
> Project: STX_build_master_master
> Build #: 129
> Status: Failure
> Timestamp: 20190605T013000Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190605T013000Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> BUILD_CONTAINERS_DEV: false
> BUILD_CONTAINERS_STABLE: false

From Bill.Zvonar at windriver.com  Wed Jun  5 15:57:17 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 5 Jun 2019 15:57:17 +0000
Subject: [Starlingx-discuss] Community Call (June 5, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A4AD34@ALA-MBD.corp.ad.wrs.com>

Notes & actions from today's call. Bill...

Bitergia Community Activity Dashboard (Thierry Carrez)
- https://starlingx.biterg.io/
- http://lists.starlingx.io/pipermail/starlingx-discuss/2019-May/004526.html
- Overview + Top Row of Dashboard
  - proof of concept - started in January, valid for one year - need to decide by ~November if they'll continue - our feedback is important
  - Kata Containers finds it useful
  - generally using Gerrit data rather than git - it's more accurate than git
  - not importing previous activity - only counting contributions from StarlingX
- Middle Row: Contributors
  - helps see if a company's contributions are getting prioritized
  - Frank & Don asked if the changes are based on owner vs. author - we think author makes more sense
  - Thierry said it currently uses owner, but can be changed, and he also needs to make sure that Bitergia's using the right field (regardless of what the field is *called* in Bitergia)
  - ACTION: Don to provide examples so Thierry can follow up
  - ACTION: Bill follow up with Thierry on this (and other points from this meeting)
- Middle Row: Organizations
  - only as accurate as the affiliation data in gerrit
  - Bart asked if we can include a mix of commits & reviews, like stackalytics does
  - Thierry said he can change the default dashboard to include reviews too
- Bottom Row
  - Corporate diversity evolution: proposed changes
  - Corporate diversity evolution: code reviews
  - Corporate diversity evolution: ML posts
  - helps us see if we are attracting new orgs
  - take the current month column w/ a grain of salt, since it's not a full month
- Other
  - Scott asked about using github as a source - Thierry said we could add, but it doesn't quite jive currently, we may not like how the data's rolled up
  - ACTION: Scott & Dean to send details about the repos that we have on github

TSC Election
- per Ildiko's email, we won't have an actual election - the 3 candidates will be acclaimed and will be the new TSC members
- Dean stressed that Community members should still feel like they can ask questions to the candidates (any questions they might have asked if there was an actual campaign)

CFP Deadlines
- 3 events coming up, per Ildiko's email
  - June 16 - ONS Europe - https://events.linuxfoundation.org/events/open-networking-summit-europe-2019/program/cfp/
  - June 22 - OpenInfra Days Nordic - https://www.papercall.io/oidn-stockholm-2019
  - July 2 - Open Infrastructure Summit Shanghai - https://www.openstack.org/summit/shanghai-2019&_eboga=204325035.1551339844
- in particular, anyone want to do a StarlingX overview at the Nordic summit?
- OIS Shanghai is closing in under a month! (July 2)
- http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/004851.html

Release Team Update
- Ghada's reminder to the PLs re: StoryBoards
  - still 25 active dev StoryBoards!
  - 9 active StoryBoards that we need a story on
  - will scrub in tomorrow's meeting
- for tomorrow's meeting
  - Ghada: focus on features & exception list
  - Bruce: exceptions!
  - Bill: review Ada's test report (total/executed/pass/fail)

Bug Counts
- Bill asked if anyone knows why the Tags box count is different from the query - nobody was able to say for sure
- http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/004854.html

First Contact SIG
- tomorrow @9:30, see https://etherpad.openstack.org/p/stx-first-contact

Previous Actions
- sanity
  - still not boring - Christopher mentioned bug https://bugs.launchpad.net/starlingx/+bug/1829941
  - we talked about where one can see the current sanity issues - it's some combination of these
    - the stx-sanity tag
    - the bug importance (only Critical bugs are likely blocking sanity)
    - the bugs listed on the test team's sanity bugs page: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack
- big files
  - the xv/split workaround is cumbersome - Chris Winnicki is doing this, it's not great
    - he thinks the LP attachment max size is ~60M - must add each file one at a time - our log file could be 1G - so lots of attachments & clicking
  - Launchpad is owned by Canonical - so the Foundation can't make changes itself
  - Dean suggested possibly using http://paste.openstack.org/
  - we essentially don't have a Dropbox equivalent that we can use - Scott asked afterwards if we could get extra space at CENGN?
  - so, generally unresolved - leaving this open for another day
- wrsroot
  - Dean asked Numan about the plan for testing the sysadmin change - Numan said they have the build now & still planning for June 10

-----Original Message-----
From: Zvonar, Bill
Sent: Wednesday, June 5, 2019 6:38 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Community Call (June 5, 2019)

Reminder of today's Community call - topics include...

[...]

From Tee.Ngo at windriver.com  Wed Jun  5 16:02:34 2019
From: Tee.Ngo at windriver.com (Ngo, Tee)
Date: Wed, 5 Jun 2019 16:02:34 +0000
Subject: [Starlingx-discuss] fix solution proposal for LP1826047
In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D856225127@CDSMSX102.ccr.corp.intel.com>
References: <56829C2A36C2E542B0CCB9854828E4D856223DBA@CDSMSX102.ccr.corp.intel.com>
 <80ED4CE81E3D8F4099306648E95DAFE44CA2985E@ALA-MBD.corp.ad.wrs.com>
 <56829C2A36C2E542B0CCB9854828E4D856225127@CDSMSX102.ccr.corp.intel.com>
Message-ID: <80ED4CE81E3D8F4099306648E95DAFE44CA2BC29@ALA-MBD.corp.ad.wrs.com>

Hi,

I think the likely reason you got the OrphanedObjectError exception is because you were trying to save new app state to the database without a context.

Other observations:
- Checking if kubernetes is enabled is unnecessary. Kubernetes is always enabled in StarlingX.
- By default, enabled logging levels are INFO and higher.
- Currently only helm chart based apps can be managed via system application commands.
- The rpc_app name is not suitable as the app object did not come from an RPC call.
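For completeness, the stuck state under discussion is visible from the client side with the commands already referenced in this thread. A sketch only; the application name below is just the usual example:

$ system application-list                   # shows the app stuck in "applying"
$ system application-apply stx-openstack    # typically rejected while an operation is still in progress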
Tee

From: Chen, Haochuan Z [mailto:haochuan.z.chen at intel.com]
Sent: June-05-19 9:29 AM
To: Ngo, Tee; Bailey, Henry Albert (Al); starlingx-discuss at lists.starlingx.io
Subject: fix solution proposal for LP1826047

Hi Ngo & Bailey

Today I added these lines in stx-config/sysinv/sysinv/sysinv/sysinv/conductor/kube_app.py, function __init__:

        system = dbapi.isystem_get_one()
        if system.capabilities.get('kubernetes_enabled', False):
            [...]
                if op:
                    self._abort_operation(app, op)
                    LOG.info("reset %s to %s failed" % (app.name, op))

But it will fail with:

2019-06-05 07:54:12.062 1987918 ERROR sysinv.openstack.common.threadgroup [-] Cannot call save on orphaned KubeApp object
[...]
2019-06-05 07:54:12.062 1987918 TRACE sysinv.openstack.common.threadgroup OrphanedObjectError: Cannot call save on orphaned KubeApp object

Any idea about it, or whether this solution is feasible or not? Now I am working on perform_apply_abort; tomorrow I will update you.

And for the sysinv-api exit, it shouldn't matter. As sysinv-conductor takes the task of applying the application and updating the status in the database, sysinv-api just polls the database for application-list.

BR!

Martin, Chen
SSP, Software Engineer
021-61164330

[...]

From fungi at yuggoth.org  Wed Jun  5 16:23:58 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 5 Jun 2019 16:23:58 +0000
Subject: [Starlingx-discuss] Community activity dashboard
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA4AD84C@ALA-MBD.corp.ad.wrs.com>
References: <469af93d-2d15-0043-1931-81a66be2278e@openstack.org>
 <586E8B730EA0DA4A9D6A80A10E486BC0A41529@ALA-MBD.corp.ad.wrs.com>
 <6703202FD9FDFF4A8DA9ACF104AE129FBA4AD84C@ALA-MBD.corp.ad.wrs.com>
Message-ID: <20190605162357.jpbv7bkbo3d6bkeo@yuggoth.org>

On 2019-06-05 15:08:36 +0000 (+0000), Penney, Don wrote:
> To follow up on the discussion on the call this morning...
>
> Here's one example of "Committer" being used rather than author.
> Jack Ding had helped with upstreaming a lot of our early commits.
> The biterg page shows him having 120 commits in config, but he's
> listed as the author of 6:
>
> config$ git log --pretty=fuller |grep '^Commit:.*Jack Ding' | wc -l
> 118
> config$ git log --pretty=fuller |grep '^Author:.*Jack Ding' | wc -l
> 6
[...]

Keep in mind that Gerrit authenticates committers, not authors, and so can only enforce a CLA or the DCO by mapping the committer's identity to a specific Gerrit account. As a result, pushing a change for another author may circumvent legal protections, so this behavior is generally discouraged and should ideally come with some manual confirmation and affirmation the author really has agreed to whatever legal paperwork the project requires whenever an exception is made.

As an aside, the Gerrit account used to push the first patch set of a new change is set as the "owner" of that change in Gerrit. This is the value we use when generating electoral rolls, sending event discounts, or building lists of contributors for release announcements. There's a good chance this is also what Bitergia is using under the hood to associate changes with contributors for the report.
-- 
Jeremy Stanley

From juan.carlos.alonso at intel.com  Wed Jun  5 19:53:19 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Wed, 5 Jun 2019 19:53:19 +0000
Subject: [Starlingx-discuss] Why the cores to use is limited to 20 if my hosts have 64 logical cores
Message-ID: <8557B550001AFB46A43A0CCC314BF85168759C6F@FMSMSX108.amr.corp.intel.com>

Hi,

I have a Standard 2+2 system, and I wanted to create 3 VMs with different vcpus each.
VCPUs were set on the flavors:

$ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 4 flavor-1
$ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 10 flavor-2
$ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 14 flavor-3

I am using cpu_policy=dedicated on the flavors.

I could launch 2 VMs, with flavor-1 (--vcpus 4) and flavor-2 (--vcpus 10). When I tried to launch VM3 (--vcpus 14) I got the following error:

Quota exceeded for cores: Requested 14, but already used 14 of 20 cores (HTTP 403) (Request-ID: req-9ab18649-8796-4cad-8768-75f895ac48c1)

Why is the number of cores to use limited to 20 if my hosts have 64 logical cores? Are just 20 vcpus configured to be used by default? How can I enable the rest of the logical CPUs to be used?

$ system host-cpu-list compute-0
+--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+-------------------+
| uuid                                 | log_core | processor | phy_core | thread | processor_model                           | assigned_function |
+--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+-------------------+
| e6f9eda4-f8be-48a3-a54c-c052cb1403e4 | 0        | 0         | 0        | 0      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Platform          |
| 23522549-88c8-4d76-b665-18edc5e1b5e2 | 1        | 0         | 1        | 0      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | vSwitch           |
| 57cc68fe-ff4a-40c5-8fe4-7b34269c1387 | 2        | 0         | 2        | 0      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | vSwitch           |
| ...                                  |          |           |          |        |                                           |                   |
| a76df976-4deb-4b4a-828a-0764ca5ddee6 | 62       | 1         | 14       | 1      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Applications      |
| 85263ef8-6060-4165-9159-db9f32efc620 | 63       | 1         | 15       | 1      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Applications      |
+--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+-------------------+

$ kubectl describe nodes compute-0
...
Capacity:
  cpu: 64
...
Allocatable:
  cpu: 64

Regards.
Juan Carlos Alonso

From jose.perez.carranza at intel.com  Wed Jun  5 20:07:50 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Wed, 5 Jun 2019 20:07:50 +0000
Subject: [Starlingx-discuss] [Containers] Feature - Provision dbmon for AIO-DX and DC systemcontroller
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A97F61C@fmsmsx101.amr.corp.intel.com>

Hi Bin

I'm checking this patch [1] for dbmon on AIO-DX, and I'm seeing that after application apply the service is shown as below:

---- ACTIVE CONTROLLER -----
wrsroot at controller-0 ~(keystone_admin)]$ sudo sm-dump --verbose | grep dbmon
dbmon   enabled-active   unknown   failed   action-failure

---- STANDBY CONTROLLER -----
controller-1:~$ sudo sm-dump --verbose | grep dbmon
dbmon   enabled-standby   unknown   failed   action-failure

Is that expected? I see that it is actually `enabled-active`, but shall we rely on this field? I don't know what the other fields (unknown, failed, action-failure) mean, and whether this is expected or there is actually a failure.
1- https://review.opendev.org/#/c/650455/ Regards, José From Brent.Rowsell at windriver.com Wed Jun 5 20:28:02 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 5 Jun 2019 20:28:02 +0000 Subject: [Starlingx-discuss] Why the cores to use is limited to 20 if my hosts have 64 logical cores In-Reply-To: <8557B550001AFB46A43A0CCC314BF85168759C6F@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85168759C6F@FMSMSX108.amr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB4FACAC@ALA-MBD.corp.ad.wrs.com> Check your vcpu quota setting Brent From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Wednesday, June 5, 2019 3:53 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Why the cores to use is limited to 20 if my hosts have 64 logical cores Hi, I have a Standard 2+2 system, I wanted to create 3 VMs with different vcpus each one. VCPUs were set on flavors: $ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 4 flavor-1 $ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 10 flavor-2 $ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 14 flavor-3 I am using cpu_policy=dedicated on flavors. I could launch 2 VMs, with flavor-1 (--vcpus 4) and flavor-2 (--vcpus 10). When tried to launch VM3 (--vcpus 14) I got the following error: Quota exceeded for cores: Requested 14, but already used 14 of 20 cores (HTTP 403) (Request-ID: req-9ab18649-8796-4cad-8768-75f895ac48c1) Why the cores to use is limited to 20 if my hosts have 64 logical cores? Are just 20 vcpus configured to be used by default? How can I enable the rest of logical CPUs to use them? $ system host-cpu-list compute-0 +--------------------------------------+-------+-----------+-------+--------+-------------------------------------------+-------------------+ | uuid | log_c | processor | phy_c | thread | processor_model | assigned_function | | | ore | | ore | | | | +--------------------------------------+-------+-----------+-------+--------+-------------------------------------------+-------------------+ | e6f9eda4-f8be-48a3-a54c-c052cb1403e4 | 0 | 0 | 0 | 0 | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Platform | | 23522549-88c8-4d76-b665-18edc5e1b5e2 | 1 | 0 | 1 | 0 | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | vSwitch | | 57cc68fe-ff4a-40c5-8fe4-7b34269c1387 | 2 | 0 | 2 | 0 | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | vSwitch | |... |... | a76df976-4deb-4b4a-828a-0764ca5ddee6 | 62 | 1 | 14 | 1 | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Applications | | 85263ef8-6060-4165-9159-db9f32efc620 | 63 | 1 | 15 | 1 | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Applications | +--------------------------------------+-------+-----------+-------+--------+-------------------------------------------+-------------------+ $ kubectl describe nodes compute-0 ... Capacity: cpu: 64 ... Allocatable: cpu: 64 Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From juan.carlos.alonso at intel.com  Wed Jun  5 21:23:33 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Wed, 5 Jun 2019 21:23:33 +0000
Subject: [Starlingx-discuss] Why the cores to use is limited to 20 if my hosts have 64 logical cores
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB4FACAC@ALA-MBD.corp.ad.wrs.com>
References: <8557B550001AFB46A43A0CCC314BF85168759C6F@FMSMSX108.amr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB4FACAC@ALA-MBD.corp.ad.wrs.com>
Message-ID: <8557B550001AFB46A43A0CCC314BF85168759CF7@FMSMSX108.amr.corp.intel.com>

In case you want to change some value, here is the openstack documentation:
https://docs.openstack.org/python-openstackclient/stein/cli/command-objects/quota.html

Regards.
Juan Carlos Alonso

From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com]
Sent: Wednesday, June 5, 2019 3:28 PM
To: Alonso, Juan Carlos ; starlingx-discuss at lists.starlingx.io
Subject: RE: Why the cores to use is limited to 20 if my hosts have 64 logical cores

Check your vcpu quota setting.

Brent

From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com]
Sent: Wednesday, June 5, 2019 3:53 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Why the cores to use is limited to 20 if my hosts have 64 logical cores

Hi,

I have a Standard 2+2 system, I wanted to create 3 VMs with different vcpus each one.

VCPUs were set on flavors:

$ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 4 flavor-1
$ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 10 flavor-2
$ openstack flavor create --ram ${ram} --disk ${disk} --vcpus 14 flavor-3

I am using cpu_policy=dedicated on the flavors. I could launch 2 VMs, with flavor-1 (--vcpus 4) and flavor-2 (--vcpus 10). When I tried to launch VM3 (--vcpus 14) I got the following error:

Quota exceeded for cores: Requested 14, but already used 14 of 20 cores (HTTP 403) (Request-ID: req-9ab18649-8796-4cad-8768-75f895ac48c1)

Why is the number of usable cores limited to 20 when my hosts have 64 logical cores? Are just 20 vcpus configured to be used by default? How can I enable the rest of the logical CPUs for use?

$ system host-cpu-list compute-0
+--------------------------------------+-------+-----------+-------+--------+-------------------------------------------+-------------------+
| uuid                                 | log_c | processor | phy_c | thread | processor_model                           | assigned_function |
|                                      | ore   |           | ore   |        |                                           |                   |
+--------------------------------------+-------+-----------+-------+--------+-------------------------------------------+-------------------+
| e6f9eda4-f8be-48a3-a54c-c052cb1403e4 | 0     | 0         | 0     | 0      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Platform          |
| 23522549-88c8-4d76-b665-18edc5e1b5e2 | 1     | 0         | 1     | 0      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | vSwitch           |
| 57cc68fe-ff4a-40c5-8fe4-7b34269c1387 | 2     | 0         | 2     | 0      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | vSwitch           |
|...                                   |...
| a76df976-4deb-4b4a-828a-0764ca5ddee6 | 62    | 1         | 14    | 1      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Applications      |
| 85263ef8-6060-4165-9159-db9f32efc620 | 63    | 1         | 15    | 1      | Intel(R) Xeon(R) Gold 6142M CPU @ 2.60GHz | Applications      |
+--------------------------------------+-------+-----------+-------+--------+-------------------------------------------+-------------------+

$ kubectl describe nodes compute-0
...
Capacity:
  cpu: 64
...
Allocatable:
  cpu: 64

Regards.
Juan Carlos Alonso
-------------- next part --------------
An HTML attachment was scrubbed...
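A minimal sketch of checking and raising the cores quota with the client documented above (the project name is a placeholder; admin credentials are assumed):

$ openstack quota show <project> | grep cores    # current limit, 20 by default here
$ openstack quota set --cores 64 <project>
$ openstack quota show <project> | grep cores    # confirm the new limit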
URL: From michael.l.tullis at intel.com Wed Jun 5 22:02:11 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 5 Jun 2019 22:02:11 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 6/5/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1B53223@ORSMSX103.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Wed Jun 5 23:40:28 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 5 Jun 2019 23:40:28 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190605 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-05 (link) Status: YELLOW ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs | 1 TCs FAIL Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 1 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 1 TCs FAIL Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 51 TCs | 19 TCs FAIL Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs | 16 TCs FAIL Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs | 20 TCs FAIL Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Intermittent ssh connection timeout failure. https://bugs.launchpad.net/starlingx/+bug/1831807 Unable to get version from keystone URL https://bugs.launchpad.net/starlingx/+bug/1831809 Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jun 6 02:08:48 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 5 Jun 2019 22:08:48 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 292 - Still Failing! 
In-Reply-To: <617339381.8.1559701427441.JavaMail.javamailuser@localhost>
References: <617339381.8.1559701427441.JavaMail.javamailuser@localhost>
Message-ID: <2071886384.16.1559786929580.JavaMail.javamailuser@localhost>

Project: STX_build_pre_installer
Build #: 292
Status: Still Failing
Timestamp: 20190606T020433Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190606T013000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190606T013000Z
DOCKER_BUILD_ID: jenkins-master-20190606T013000Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190606T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190606T013000Z/logs
MASTER_JOB_NAME: STX_build_master_master
MY_REPO_ROOT: /localdisk/designer/jenkins/master

From build.starlingx at gmail.com  Thu Jun  6 02:08:52 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 5 Jun 2019 22:08:52 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 130 - Still Failing!
In-Reply-To: <394962037.11.1559701433028.JavaMail.javamailuser@localhost>
References: <394962037.11.1559701433028.JavaMail.javamailuser@localhost>
Message-ID: <1724086807.19.1559786933057.JavaMail.javamailuser@localhost>

Project: STX_build_master_master
Build #: 130
Status: Still Failing
Timestamp: 20190606T013000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190606T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From zhipengs.liu at intel.com  Thu Jun  6 08:31:03 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Thu, 6 Jun 2019 08:31:03 +0000
Subject: [Starlingx-discuss] About task of integrating containerized Nova Placement into StarlingX
References: <93814834B4855241994F290E959305C75307C9D9@SHSMSX104.ccr.corp.intel.com> <58CF5BABC9A76946A638A0E8AE48D173718469EE@ALA-MBD.corp.ad.wrs.com>
Message-ID: <93814834B4855241994F290E959305C75308BD80@SHSMSX104.ccr.corp.intel.com>

Hi Bruce and all,

Below is an update on the openstack-placement containerization task. The placement API works now. I have tested application apply and VM creation; both pass! Besides that, I will do some basic testing on multi-node as well.

I’d like to seek support from nova experts about how to test nova-placement in depth. If possible, could we involve the test team to help run a sanity test before getting these patches merged?

The 4 patches below are submitted for review (662229 is an openstack-helm upstream patch):
1) Add stx-placement docker image directives files
https://review.opendev.org/#/c/661679/
2) WIP: add placement chart (submitted to openstack-helm project)
https://review.opendev.org/#/c/662229/
3) Add placement chart patch to openstack-helm
https://review.opendev.org/#/c/662371/
4) Add placement chart to armada system
https://review.opendev.org/#/c/662614/

Your comments are appreciated! Thanks!

Zhipeng

From: Liu, ZhipengS
Sent: May 31, 2019 8:55
To: 'Kopec, Gerald (Gerry)'
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] About task of integrating containerized Nova Placement into StarlingX

Hi Gerry,

Thanks for your great comments!
For step 2, I already submitted an SB and a patch to the openstack-helm project:
https://review.opendev.org/#/c/662229/
Meanwhile, I will add these charts as a patch to openstack-helm.
For step 3, I will submit 1 patch adding the placement chart to the armada system.
For step 4, I will submit 1 patch to openstack-helm changing the nova chart to turn off placement in nova.

Thanks!
Zhipeng

From: Kopec, Gerald (Gerry) [mailto:Gerry.Kopec at windriver.com]
Sent: May 31, 2019 6:41
To: Liu, ZhipengS
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] About task of integrating containerized Nova Placement into StarlingX

Hi Zhipeng,

Some initial thoughts below…

Gerry

From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
Sent: Tuesday, May 28, 2019 10:07 AM
To: Kopec, Gerald (Gerry)
Cc: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] About task of integrating containerized Nova Placement into StarlingX

Hi Gerry,

I'm working on the task of integrating containerized Nova Placement into StarlingX:
https://storyboard.openstack.org/#!/story/2005750
I really need your guidance for this task. I already have a high-level understanding of it. It should include the steps below:

1) Create new container image for Placement
https://review.opendev.org/#/c/661679
Done

2) Add Helm chart related yaml configuration for Placement (need time to digest the details)
Need to move the related configuration out of Nova, and add some new configuration.
Now I'm working on the patch adding the chart for placement. I created the folder below:
stx-config/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/openstack-placement
Then I will create the related chart yaml files. I mainly refer to the ones in openstack-helm/nova/
[GK] Don’t think this should go in starlingx. Should be with openstack-helm. Looks like you’ve got that started with https://review.opendev.org/#/c/662229/. It would be based on what is in openstack-helm/nova related to placement. For initial deployment within starlingx, you’ll probably have to add a patch with your changes to:
stx-upstream/openstack/openstack-helm/files/

3) Start placement container in controller and make sure the placement service starts successfully.
[GK] You’ll need to add placement to the armada manifest so it will get started on stx-openstack application-apply. I would assume it would go with the compute-kit. See:
stx-config/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/manifests/manifest.yaml
Also you’ll need to set up dynamic overrides for placement in sysinv/helm, see nova example:
stx-config/sysinv/sysinv/sysinv/sysinv/helm/nova.py
Placement needs to be included in the sysinv generated helm override yaml files and provided as input to armada on application-apply.

4) Do related modification for the current Nova yaml file. One patch for modifying the chart for nova.
[GK] I’m hoping it’s possible to turn off the placement capability in openstack-helm/nova via overrides, but maybe an openstack-helm nova chart change will be required.

5) Integrate into starlingx and start the separate container. Debug and sanity test.
[GK] You’ll also need to include the placement helm chart in the built helm chart tarball.

It should be OK to finish the related patches in the next 3 weeks, but for code review/merge, it may have some risk.

Your comments are appreciated! Thanks!

Zhipeng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From pvmpublic at gmail.com  Thu Jun  6 12:28:23 2019
From: pvmpublic at gmail.com (Pratik M.)
Date: Thu, 6 Jun 2019 17:58:23 +0530
Subject: [Starlingx-discuss] mirror.starlingx.cengn.ca reachability
Message-ID: 

Hi,

I cannot reach mirror.starlingx.cengn.ca. I checked from two ISPs in India, but it seems to be reachable from others. From the results below, it seems that the site is not reachable from many ISPs/locations (Frankfurt, Toronto etc.):
https://tools.keycdn.com/ping (8 out of 14 fails) and https://lg.he.net/

core1.sea1.he.net> ping 135.84.104.40 numeric count 5
Sending 5, 16-byte ICMP Echo to 135.84.104.40, timeout 5000 msec, TTL 64
Request timed out.
[...]

core1.fra1.he.net> ping 135.84.104.40 numeric count 5
Sending 5, 16-byte ICMP Echo to 135.84.104.40, timeout 5000 msec, TTL 64
Request timed out.
[...]

core1.syd1.he.net> ping 135.84.104.40 numeric count 5
Sending 5, 16-byte ICMP Echo to 135.84.104.40, timeout 5000 msec, TTL 64
Reply from 135.84.104.40 : bytes=16 time=216ms TTL=55

Here is a traceroute from a failed ping:

$ tracert -d mirror.starlingx.cengn.ca
Tracing route to mirror.starlingx.cengn.ca [135.84.104.40] over a maximum of 30 hops:
  3   120 ms   139 ms   203 ms  10.50.112.57
  4   108 ms   118 ms   140 ms  10.61.37.33
  5    60 ms    54 ms    94 ms  125.22.219.5
  6   211 ms   230 ms   232 ms  182.79.152.160
  7   229 ms   196 ms   222 ms  63.218.107.193
  8   576 ms   785 ms   977 ms  63.218.4.234
  9   520 ms   645 ms   549 ms  63.218.4.234
 10   678 ms   586 ms   548 ms  209.8.108.158
 11   499 ms   568 ms   615 ms  209.148.237.14
 12   553 ms   582 ms   596 ms  209.148.229.230
 13   484 ms   515 ms   550 ms  209.148.249.217
 14   535 ms   587 ms   643 ms  209.148.251.93
 15     *      440 ms   379 ms  209.148.251.82
 16   389 ms   418 ms   444 ms  207.107.79.178
 17  207.107.79.178  reports: Destination net unreachable.

Thanks
Pratik

From Bin.Qian at windriver.com  Thu Jun  6 12:30:58 2019
From: Bin.Qian at windriver.com (Qian, Bin)
Date: Thu, 6 Jun 2019 12:30:58 +0000
Subject: [Starlingx-discuss] [Containers] Feature - Provision dbmon for AIO-DX and DC systemcontroller
In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A97F61C@fmsmsx101.amr.corp.intel.com>
References: <0A5D9A624DF90343892F8F3FE7DE525A2A97F61C@fmsmsx101.amr.corp.intel.com>
Message-ID: 

Hi José,

When you checked the patch, did you also pick the 2 patches [1] and [2] listed as depends-on?

Your sm-dump lists the following columns:
service-name  desired-state    state    status  condition
dbmon         enabled-active   unknown  failed  action-failure
dbmon         enabled-standby  unknown  failed  action-failure

Please note that a bug [3] is being fixed. Also storyboard [4] enhanced the behavior for dbmon.
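For readability, a one-liner that labels those columns in the dump output (the field positions assume exactly the layout shown above):

$ sudo sm-dump --verbose | awk '/^dbmon/ {print "desired:"$2, "state:"$3, "status:"$4, "condition:"$5}'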
[1] https://review.opendev.org/#/c/650301/
[2] https://review.opendev.org/#/c/650288/
[3] https://bugs.launchpad.net/starlingx/+bug/1826891
[4] https://storyboard.openstack.org/#!/story/2005486

Thanks,
Bin
________________________________________
From: Perez Carranza, Jose [jose.perez.carranza at intel.com]
Sent: Wednesday, June 05, 2019 1:07 PM
To: Qian, Bin; 'starlingx-discuss at lists.starlingx.io'
Subject: [Containers] Feature - Provision dbmon for AIO-DX and DC systemcontroller

Hi Bin

I'm checking this patch [1] for dbmon on AIO-DX, and I'm seeing that after application apply the service is shown as below:

---- ACTIVE CONTROLLER -----
[wrsroot@controller-0 ~(keystone_admin)]$ sudo sm-dump --verbose |grep dbmon
dbmon enabled-active unknown failed action-failure

---- STANDBY CONTROLLER -----
controller-1:~$ sudo sm-dump --verbose |grep dbmon
dbmon enabled-standby unknown failed action-failure

Is that expected? I see that the service is actually `enabled-active`, but shall we rely on this field? I don't know what the other fields (unknown, failed, action-failure) mean, or whether this is expected or there is actually a failure.

1- https://review.opendev.org/#/c/650455/

Regards,
José

From Anirudh.Gupta at hsc.com  Thu Jun  6 04:59:28 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Thu, 6 Jun 2019 04:59:28 +0000
Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm
Message-ID: 

Hi Team,

I have created All in one simplex setup using release 2018.10.

I have spawned 2 VM’s on it.
The ping is successful between the VM’s, but I am unable to ssh or run iperf on it.

Can you please help me in resolving the issue.

Regards
अनिरुद्ध गुप्ता
(वरिष्ठ अभियंता)
Hughes Systique Corporation
D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From adrien.macor at hotmail.com  Thu Jun  6 11:31:52 2019
From: adrien.macor at hotmail.com (Adrien Macor)
Date: Thu, 6 Jun 2019 11:31:52 +0000
Subject: [Starlingx-discuss] Edge nodes
Message-ID: 

Hi,

I'm not completely sure how to deploy edge sites. I will first install my central cloud (https://docs.starlingx.io/installation_guide/current/index.html). But then I didn't find how to install, configure, etc. the edge sites.

By the way, I found this picture:

[cid:9c067e6c-e772-4fb8-9dac-fe1a9f8f50a8]

and was wondering what the difference is between those edges? I understand that different VNFs require different resources (hardware); I wanted to know where the differences are explained. Is this link right?

Thanks for the help

Adrianp
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Outlook-1ld0zqzi.png
Type: image/png
Size: 44267 bytes
Desc: Outlook-1ld0zqzi.png
URL: 
From bruce.e.jones at intel.com  Thu Jun  6 17:00:03 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Thu, 6 Jun 2019 17:00:03 +0000
Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm
In-Reply-To: 
References: 
Message-ID: <9A85D2917C58154C960D95352B22818BD076128C@fmsmsx123.amr.corp.intel.com>

Hello Anirudh. Thank you for trying out StarlingX! I have snipped some of the mailing lists you used; this one is all you need for StarlingX questions.

We don’t yet have trouble-shooting guides posted online. There are commands you can enter to check on system health, and logs you can capture to help us triage your issue. Hopefully someone smarter than me will jump in here and provide more detail to help you move forward.

brucej

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Wednesday, June 5, 2019 9:59 PM
To: openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm

Hi Team,

I have created All in one simplex setup using release 2018.10.

I have spawned 2 VM’s on it.
The ping is successful between the VM’s, but I am unable to ssh or run iperf on it.

Can you please help me in resolving the issue.

Regards
अनिरुद्ध गुप्ता
(वरिष्ठ अभियंता)
Hughes Systique Corporation
D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ildiko.vancsa at gmail.com  Thu Jun  6 18:12:27 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Thu, 6 Jun 2019 20:12:27 +0200
Subject: [Starlingx-discuss] StarlingX TSC election update and results
Message-ID: <12B33AB2-DF7D-4A6B-BE68-4219BC6AA37F@gmail.com>

Hi StarlingX Community,

I’m reaching out to you with announcements about the first StarlingX TSC election.

As the number of candidates[1] doesn’t exceed the number of open seats and the new TSC group fulfills all criteria we have listed on the governance page[2] we will skip the voting period for this election and form the new TSC group.

The term of the resigning members ends on the week of June 10th and the new members’ term starts on the week of June 17th and it is approximately one year long.

With all that said I would like to announce and also congratulate the newly elected TSC members:

* Dean Troyer
* Ian Jolliffe
* Wang Hao

I would also like to use the opportunity to say thank you to the resigning members Ana Cunha and Miguel Lavalle for all their efforts and work to help the project to form and take important first steps as part of the TSC group.
I hope you will stay involved in the project and may run again in elections in the future. Thanks and Best Regards, Ildikó Váncsa Ecosystem Technical Lead, OpenStack Foundation [1] https://opendev.org/starlingx/election/src/branch/master/candidates/2019_H1/tsc [2] https://docs.starlingx.io/governance/reference/tsc/stx_charter.html#elections From Chris.Winnicki at windriver.com Thu Jun 6 21:38:56 2019 From: Chris.Winnicki at windriver.com (Winnicki, Chris) Date: Thu, 6 Jun 2019 21:38:56 +0000 Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm In-Reply-To: References: Message-ID: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE3D9@ALA-MBD.corp.ad.wrs.com> Anirudh, can you provide some details with respect to: 1) How are you pinging from one VM to the other (is it over the graphical console ? or namespace ?) 2) What VM image are you using? - Is the VM image enabled for SSH with password ? (assuming sshd is running) 3) Network topology 4) Are you trying to ssh from one VM to the other or from a different network segment? Is there a virtual router in the picture, etc.. Chris Winnicki chris.winnicki at windriver.com 613-963-1329 ________________________________ From: Anirudh Gupta [Anirudh.Gupta at hsc.com] Sent: Thursday, June 06, 2019 12:59 AM To: openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Hi Team, I have created All in one simplex setup using release 2018.10. I have spawned 2 VM’s on it. The ping is successful between the VM’s, but I am unable to ssh or run iperf on it. Can you please help me in resolving the issue. Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Jun 7 00:06:14 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 7 Jun 2019 00:06:14 +0000 Subject: [Starlingx-discuss] Upcoming Milestone-3 for stx.2.0 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA50330F@ALA-MBD.corp.ad.wrs.com> Hello all, We are fast approaching milestone-3 (Feature Freeze) for stx.2.0 which is scheduled for June 14. The starlingx release team and project leads are tracking a few exceptions and items at risk, but we are driving hard to meet this milestone. See list here: https://etherpad.openstack.org/p/stx-releases We need the help of the starlingx core reviewers and technical leads to review the remaining feature code that is planned for MS-3. 
The list of open stx.2.0 features is:
https://storyboard.openstack.org/#!/story/list?status=active&project_group_id=86&tags=stx.2.0

Please give the first priority to code related to the above list in support of the milestone. stx.2.0 bug fixes can continue to go in - especially ones with critical/high priority.

For code reviews unrelated to stx.2.0 (code for deferred items to stx.3.0, new stx.3.0 features, enhancements), only passive/disabled code should be merged in master until the stx.2.0 RC1 branch is created (Aug 5). We will leave it to the judgment of the technical leads and core reviewers to determine if code is safe to merge. If you need an opinion, the release planning team is happy to help (myself, Bill and Bruce).

Regards,
Ghada (on behalf of the starlingx release planning team)

References:
[0] Milestone criteria: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Milestones
[1] stx.2.0 Milestone Dates: https://docs.google.com/spreadsheets/d/1HUwbsaSerzFRuvXVB_qvoGdI0Chx1YiiA2WYHwvIoYI/edit#gid=0

From cindy.xie at intel.com  Fri Jun  7 00:25:25 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Fri, 7 Jun 2019 00:25:25 +0000
Subject: [Starlingx-discuss] StarlingX TSC election update and results
In-Reply-To: <12B33AB2-DF7D-4A6B-BE68-4219BC6AA37F@gmail.com>
References: <12B33AB2-DF7D-4A6B-BE68-4219BC6AA37F@gmail.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F844F7@SHSMSX104.ccr.corp.intel.com>

Congratulations to Dean, Ian and Hao! And a special welcome to Hao as a new TSC member in the StarlingX community!

Thanks. - cindy

-----Original Message-----
From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com]
Sent: Friday, June 7, 2019 2:12 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] StarlingX TSC election update and results

Hi StarlingX Community,

I’m reaching out to you with announcements about the first StarlingX TSC election.

As the number of candidates[1] doesn’t exceed the number of open seats and the new TSC group fulfills all criteria we have listed on the governance page[2] we will skip the voting period for this election and form the new TSC group.

The term of the resigning members ends on the week of June 10th and the new members’ term starts on the week of June 17th and it is approximately one year long.

With all that said I would like to announce and also congratulate the newly elected TSC members:

* Dean Troyer
* Ian Jolliffe
* Wang Hao

I would also like to use the opportunity to say thank you to the resigning members Ana Cunha and Miguel Lavalle for all their efforts and work to help the project to form and take important first steps as part of the TSC group. I hope you will stay involved in the project and may run again in elections in the future.

Thanks and Best Regards,
Ildikó Váncsa
Ecosystem Technical Lead, OpenStack Foundation

[1] https://opendev.org/starlingx/election/src/branch/master/candidates/2019_H1/tsc
[2] https://docs.starlingx.io/governance/reference/tsc/stx_charter.html#elections

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From build.starlingx at gmail.com  Fri Jun  7 04:37:15 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Fri, 7 Jun 2019 00:37:15 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 244 - Failure!
Message-ID: <696527912.23.1559882236184.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 244 Status: Failure Timestamp: 20190607T043710Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190607T013000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190607T013000Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190607T013000Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190607T013000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190607T013000Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Fri Jun 7 04:37:18 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 7 Jun 2019 00:37:18 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 131 - Still Failing! In-Reply-To: <1491386340.17.1559786930325.JavaMail.javamailuser@localhost> References: <1491386340.17.1559786930325.JavaMail.javamailuser@localhost> Message-ID: <1268630762.26.1559882239471.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 131 Status: Still Failing Timestamp: 20190607T013000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190607T013000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From ildiko.vancsa at gmail.com Fri Jun 7 06:40:58 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 7 Jun 2019 08:40:58 +0200 Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm In-Reply-To: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE3D9@ALA-MBD.corp.ad.wrs.com> References: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE3D9@ALA-MBD.corp.ad.wrs.com> Message-ID: <5A608B8B-7E10-46C0-9256-783B389CBE0F@gmail.com> Removing some mailing lists that are not relevant for this discussion. Please keep on using starlingx-discuss only. Thanks, Ildikó > On 2019. Jun 6., at 23:38, Winnicki, Chris wrote: > > Anirudh, can you provide some details with respect to: > > 1) How are you pinging from one VM to the other (is it over the graphical console ? or namespace ?) > 2) What VM image are you using? - Is the VM image enabled for SSH with password ? (assuming sshd is running) > 3) Network topology > 4) Are you trying to ssh from one VM to the other or from a different network segment? Is there a virtual router in the picture, etc.. > > > Chris Winnicki > chris.winnicki at windriver.com > 613-963-1329 > From: Anirudh Gupta [Anirudh.Gupta at hsc.com] > Sent: Thursday, June 06, 2019 12:59 AM > To: openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm > > Hi Team, > > I have created All in one simplex setup using release 2018.10. > > I have spawned 2 VM’s on it. > The ping is successful between the VM’s, but I am unable to ssh or run iperf on it. > > Can you please help me in resolving the issue. 
> > Regards
> अनिरुद्ध गुप्ता
> (वरिष्ठ अभियंता)
> Hughes Systique Corporation
> D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001
>
> DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From thierry at openstack.org  Fri Jun  7 09:14:46 2019
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 7 Jun 2019 11:14:46 +0200
Subject: [Starlingx-discuss] Community activity dashboard
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA4AD84C@ALA-MBD.corp.ad.wrs.com>
References: <469af93d-2d15-0043-1931-81a66be2278e@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC0A41529@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA4AD84C@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

Penney, Don wrote:
> To follow up on the discussion on the call this morning...
>
> Here's one example of "Committer" being used rather than author. Jack Ding had helped with upstreaming a lot of our early commits. The biterg page shows him having 120 commits in config, but he's listed as the author of 6:
>
> config$ git log --pretty=fuller |grep '^Commit:.*Jack Ding' | wc -l
> 118
> config$ git log --pretty=fuller |grep '^Author:.*Jack Ding' | wc -l
> 6

There are actually *three* different things.

- The overview page tracks the *Gerrit change owner*. As Jeremy says, this is what we use to check CLA/DCO for contributions, and what we use for electoral rolls as well. So it is our default way to count "contribution".

- What you compare above is data from the *git repository commits*, not the Gerrit changes. Git commits have two concepts: Committer and Author. Bitergia allows you to track both (and by default its Data sources/git dashboard shows Authors).

This data (Git Committer and Author) is less reliable than the Gerrit Owner data, as you potentially inherit work done outside your community (in case of upstream merges) and duplicate data (in case of repository forks).

So in summary, you can totally track Git Authorship with the Bitergia tooling... and I can add a panel on the Git data source page that will make that easier. But I would strongly recommend against using Git Authors and for using Gerrit Owners to count basic code contribution, as the latter is a much more reliable metric and happens to match what we use for license compliance and governance elections.
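For a quick local comparison of the two git-side views, git shortlog can group by either identity (run from a repo clone; this only covers the git data, not Gerrit owners):

config$ git shortlog -sn | head     # commits grouped by Author
config$ git shortlog -snc | head    # commits grouped by Committer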
-- 
Thierry Carrez (ttx)

From Ian.Jolliffe at windriver.com  Fri Jun  7 12:43:48 2019
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Fri, 7 Jun 2019 12:43:48 +0000
Subject: [Starlingx-discuss] CFP reminders
In-Reply-To: 
References: 
Message-ID: 

Thanks Ildiko;

For the Open Infrastructure Summit, people from the programming committee for the various tracks have posted office hours in case people have questions about what topics may be of interest. The etherpad is here [0]:

https://etherpad.openstack.org/p/ShanghaiOfficeHours

I am one of the volunteers and my hours are as follows: Friday June 7, 14 and 21 from 9-10am EST on IRC (ijolliffe) in #open-infra-summit-cfp - I can open a Zoom call as required.

Regards;

Ian

On 2019-06-05, 8:38 AM, "Ildiko Vancsa" wrote:

    Hi StarlingX Community,

    I wanted to draw your attention to a few CFP deadlines that are approaching quickly:

    * June 16 - ONS Europe - https://events.linuxfoundation.org/events/open-networking-summit-europe-2019/program/cfp/
    * June 22 - OpenInfra Days Nordic - https://www.papercall.io/oidn-stockholm-2019
    * July 2 - Open Infrastructure Summit Shanghai - https://www.openstack.org/summit/shanghai-2019&_eboga=204325035.1551339844

    Please let me know if you need help with your session proposals.

    Thanks,
    Ildikó

    _______________________________________________
    Starlingx-discuss mailing list
    Starlingx-discuss at lists.starlingx.io
    http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From build.starlingx at gmail.com  Fri Jun  7 13:58:04 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Fri, 7 Jun 2019 09:58:04 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 245 - Still Failing!
In-Reply-To: <1324300186.21.1559882232464.JavaMail.javamailuser@localhost>
References: <1324300186.21.1559882232464.JavaMail.javamailuser@localhost>
Message-ID: <1760026949.29.1559915885130.JavaMail.javamailuser@localhost>

Project: STX_build_helm_charts
Build #: 245
Status: Still Failing
Timestamp: 20190607T135800Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190607T013000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190607T013000Z
OS: centos
DOCKER_BUILD_ID: jenkins-master-20190607T013000Z-builder
MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190607T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190607T013000Z/logs
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos

From Chris.Winnicki at windriver.com  Fri Jun  7 14:03:29 2019
From: Chris.Winnicki at windriver.com (Winnicki, Chris)
Date: Fri, 7 Jun 2019 14:03:29 +0000
Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm
In-Reply-To: 
References: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE3D9@ALA-MBD.corp.ad.wrs.com>,
Message-ID: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE67D@ALA-MBD.corp.ad.wrs.com>

Anirudh: Have you tried the workaround mentioned in comment #7 in the bug report ?
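For reference, the comment #7 workaround disables rx/tx checksum offloading on the guest's interface (eth0 here is an assumption; substitute the actual interface name inside the VM):

ethtool --offload eth0 rx off tx off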
Are you able to ssh to localhost (ssh to VM itself from within the VM) Chris Winnicki chris.winnicki at windriver.com 613-963-1329 ________________________________ From: Anirudh Gupta [Anirudh.Gupta at hsc.com] Sent: Thursday, June 06, 2019 10:48 PM To: Winnicki, Chris; openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Hi Chris, I am pinging from one VM to another over graphical console, which is successful. But when I try to run ssh or iperf, then there is no success. I am using Ubuntu 16.04 Image, which is ssh enabled and yes the service sshd is running. I have created a flat network as well as vlan network and tried doing ssh/iperf on both, but with no success. There is no virtual router. I am suspecting the issue mentioned in the below bug https://bugs.launchpad.net/starlingx/+bug/1790514 But I have no understanding as why it is happening. Regards Anirudh Gupta (Senior Engineer) From: Winnicki, Chris Sent: 07 June 2019 03:09 To: Anirudh Gupta ; openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Anirudh, can you provide some details with respect to: 1) How are you pinging from one VM to the other (is it over the graphical console ? or namespace ?) 2) What VM image are you using? - Is the VM image enabled for SSH with password ? (assuming sshd is running) 3) Network topology 4) Are you trying to ssh from one VM to the other or from a different network segment? Is there a virtual router in the picture, etc.. Chris Winnicki chris.winnicki at windriver.com 613-963-1329 ________________________________ From: Anirudh Gupta [Anirudh.Gupta at hsc.com] Sent: Thursday, June 06, 2019 12:59 AM To: openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Hi Team, I have created All in one simplex setup using release 2018.10. I have spawned 2 VM’s on it. The ping is successful between the VM’s, but I am unable to ssh or run iperf on it. Can you please help me in resolving the issue. Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. 
The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From scott.little at windriver.com  Fri Jun  7 14:27:08 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 7 Jun 2019 10:27:08 -0400
Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 131 - Still Failing!
In-Reply-To: <1268630762.26.1559882239471.JavaMail.javamailuser@localhost>
References: <1491386340.17.1559786930325.JavaMail.javamailuser@localhost> <1268630762.26.1559882239471.JavaMail.javamailuser@localhost>
Message-ID: <1dc4a4c4-8f8b-a5c3-3033-16498b694e3a@windriver.com>

We got past the mockchain issue.

This time we got hung up on a change to build-helm-charts.sh.  A change was merged to that tool.  Now extra arguments are required (e.g. --app stx-openstack --rpm stx-openstack-helm; a sample invocation is sketched further below).

A rebuild has been launched.

Scott

On 2019-06-07 12:37 a.m., build.starlingx at gmail.com wrote:
> Project: STX_build_master_master
> Build #: 131
> Status: Still Failing
> Timestamp: 20190607T013000Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190607T013000Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> BUILD_CONTAINERS_DEV: false
> BUILD_CONTAINERS_STABLE: false
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From scott.little at windriver.com  Fri Jun  7 14:53:11 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 7 Jun 2019 10:53:11 -0400
Subject: [Starlingx-discuss] [build] Changes to build tools
Message-ID: 

A recent change to build-helm-charts.sh broke the CENGN build.  New arguments must now be supplied.

On behalf of those of us that maintain automated build environments, I have an urgent request:

1) If possible, make the change backward compatible.  No changes to build scripts required for 'normal' use cases.

2) Notify the community of the change through the starlingx-discuss list before it goes WF+1
   - include the [build] tag in the subject
   - if the change is not backward compatible, include [action-required] in the subject
   - briefly describe your changes, but with sufficient detail that folks know what changes they need to make to successfully build an iso, docker images, helm charts, etc.
From bruce.e.jones at intel.com Thu Jun 6 14:08:53 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 6 Jun 2019 14:08:53 +0000 Subject: [Starlingx-discuss] First Contact SIG kick off meeting - notes Message-ID: <9A85D2917C58154C960D95352B22818BD075CFF9@fmsmsx123.amr.corp.intel.com> We had our first meeting today. The notes from our discussion are below. Thanks to the people who joined! We will meet again on June 20th at 6:30AM PDT (1330 UTC) Agenda (on https://etherpad.openstack.org/p/stx-first-contact) * Matthew from the OpenStack FC SIG joined! * A place for people to get started * Help new contributors get connected to projects and project teams * Worked wtih the Foundation to set up a new Contributor portal ? www.openstack.org/community ? docs.openstack.org/contributors * Noticed that patches from new contributors didn't always get attention * Gerrit bot tells people when their first patch goes in * Monitor IRC and mailing list * ask.openstack.org portal, looking for new contributors * Wiki page should show timezones so people can connect to people near them * Seeing barriers for Chinese contributors even though we see lots of activity there ? Some simple things like knowing which ports to open (IRC, gerrit, etc..) can be helpful * Looking to do outreach at the Shanghai summit * Seeing some companies "bombard" projects with lots of tiny fixes - looking to reach out to them * One of the hardest parts is getting new contributors to submit a 2nd/3rd patch set for their first change * Each project has a liaison who the FC SIG will put on reviews for new contributors * What barriers do you see from the project teams themselves? ? In the past we saw many companies assigning people to be full time contributors but we don't see that any more ? Now we are seeing people who want to work on OpenStack but it isn't part of their job * They need access to bigger machines / clouds which can be hard for them (and the community) * We don't do office hours but we do sessions at the Summit - but we only had a few walk-ins @ Denver * Ask Liaisons to monitor IRC and help answer questions * Scope of the OpenStack FC SIG is both contributors and users/operators - intended to be all inclusive ? All can make important contributions to the project - code, docs, translations, bugs reports/fixes, feedback, etc... * Glenn - I was on a call yesterday with a company that is very interested in getting involved in STX. * Asked them - how do they envision working in StarlingX - they are shy, they are not going to just start contributing code or filing bugs * They want to try to do things themselves - where can I download the code, how do I run the software, how do I file bugs? * We need to continue making the wiki and documents easier to consume for new contributors * Goals and mission - did not cover * Meeting time (this slot is available bi-weekly) * We will continue this bi-weekly in this time slot * Next steps - did not directly cover * What do new contributors need? * How can we help them? * What can we learn / borrow from the OpenStack First Contact SIG? https://wiki.openstack.org/wiki/First_Contact_SIG * How do we manage the work of helping new contributors? * What tools, documents, improvements, etc... are needed? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Anirudh.Gupta at hsc.com Fri Jun 7 02:48:45 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Fri, 7 Jun 2019 02:48:45 +0000 Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm In-Reply-To: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE3D9@ALA-MBD.corp.ad.wrs.com> References: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE3D9@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Chris, I am pinging from one VM to another over graphical console, which is successful. But when I try to run ssh or iperf, then there is no success. I am using Ubuntu 16.04 Image, which is ssh enabled and yes the service sshd is running. I have created a flat network as well as vlan network and tried doing ssh/iperf on both, but with no success. There is no virtual router. I am suspecting the issue mentioned in the below bug https://bugs.launchpad.net/starlingx/+bug/1790514 But I have no understanding as why it is happening. Regards Anirudh Gupta (Senior Engineer) From: Winnicki, Chris Sent: 07 June 2019 03:09 To: Anirudh Gupta ; openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Anirudh, can you provide some details with respect to: 1) How are you pinging from one VM to the other (is it over the graphical console ? or namespace ?) 2) What VM image are you using? - Is the VM image enabled for SSH with password ? (assuming sshd is running) 3) Network topology 4) Are you trying to ssh from one VM to the other or from a different network segment? Is there a virtual router in the picture, etc.. Chris Winnicki chris.winnicki at windriver.com 613-963-1329 ________________________________ From: Anirudh Gupta [Anirudh.Gupta at hsc.com] Sent: Thursday, June 06, 2019 12:59 AM To: openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Hi Team, I have created All in one simplex setup using release 2018.10. I have spawned 2 VM’s on it. The ping is successful between the VM’s, but I am unable to ssh or run iperf on it. Can you please help me in resolving the issue. Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anirudh.Gupta at hsc.com Fri Jun 7 06:53:55 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Fri, 7 Jun 2019 06:53:55 +0000 Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm In-Reply-To: <5A608B8B-7E10-46C0-9256-783B389CBE0F@gmail.com> References: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE3D9@ALA-MBD.corp.ad.wrs.com> <5A608B8B-7E10-46C0-9256-783B389CBE0F@gmail.com> Message-ID: Hi Chris, I am pinging from one VM to another over graphical console, which is successful. But when I try to run ssh or iperf, then there is no success. I am using Ubuntu 16.04 Image, which is ssh enabled and yes the service sshd is running. I have created a flat network as well as vlan network and tried doing ssh/iperf on both, but with no success. There is no virtual router. I am suspecting the issue mentioned in the below bug https://bugs.launchpad.net/starlingx/+bug/1790514 But I have no understanding as why it is happening. Regards Anirudh Gupta (Senior Engineer) -----Original Message----- From: Ildiko Vancsa Sent: 07 June 2019 12:11 To: Winnicki, Chris ; Anirudh Gupta Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Removing some mailing lists that are not relevant for this discussion. Please keep on using starlingx-discuss only. Thanks, Ildikó > On 2019. Jun 6., at 23:38, Winnicki, Chris wrote: > > Anirudh, can you provide some details with respect to: > > 1) How are you pinging from one VM to the other (is it over the graphical console ? or namespace ?) > 2) What VM image are you using? - Is the VM image enabled for SSH with password ? (assuming sshd is running) > 3) Network topology > 4) Are you trying to ssh from one VM to the other or from a different network segment? Is there a virtual router in the picture, etc.. > > > Chris Winnicki > chris.winnicki at windriver.com > 613-963-1329 > From: Anirudh Gupta [Anirudh.Gupta at hsc.com] > Sent: Thursday, June 06, 2019 12:59 AM > To: openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm > > Hi Team, > > I have created All in one simplex setup using release 2018.10. > > I have spawned 2 VM’s on it. > The ping is successful between the VM’s, but I am unable to ssh or run iperf on it. > > Can you please help me in resolving the issue. > > Regards > अनिरुद्ध गुप्ता > (वरिष्ठ अभियंता) > Hughes Systique Corporation > D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 > > DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

From Anirudh.Gupta at hsc.com  Fri Jun  7 14:20:26 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Fri, 7 Jun 2019 14:20:26 +0000
Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm
In-Reply-To: <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE3D9@ALA-MBD.corp.ad.wrs.com>, , <7E4792BA14B1DE4BAB354DF77FE0233ABC8CE67D@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

Hi Chris

Yes, I ran the command ethtool --offload eth0 rx off tx off
After that, SSH/iperf ran successfully.

So, is it a bug that needs to be resolved on the StarlingX side? Because every time it needs to be run inside the guest VM, which is not practical.

Also I have another query: I was trying to create an IPv6 network on StarlingX. When I spawn a VM on it, the IP is visible in Horizon, but I am not getting the IP inside the VM. I have tried using CentOS/Ubuntu VM’s. Even after manually enabling dhcp inside the VM, the IP is not being allocated to the interface.

On Ubuntu 16.04, in the /etc/network/interfaces file:

auto eth0
iface eth0 inet6 dhcp

But still the IPv6 address is not automatically allocated to the interface. I need to manually add the IP using the command ip -6 addr add eff0:eff0:eff0::a/128 dev eth0.

Can you please help me in resolving both my issues.

Regards
Anirudh

Get Outlook for Android

From: Winnicki, Chris
Sent: Friday, 7 June, 7:33 PM
Subject: RE: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm
To: Anirudh Gupta, openstack at lists.openstack.org, openstack-dev at lists.openstack.org, starlingx-announce at lists.starlingx.io, starlingx-discuss at lists.starlingx.io

Anirudh: Have you tried the workaround mentioned in comment #7 in the bug report ?
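A couple of guest-side follow-ups for the IPv6 part of the question above (interface name and address are the ones from that message; the /128 prefix is as originally used):

$ sudo dhclient -6 eth0                                 # force a DHCPv6 request by hand
$ ip -6 addr show dev eth0                              # check what, if anything, got assigned
$ sudo ip -6 addr add eff0:eff0:eff0::a/128 dev eth0    # manual fallback, as above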
Are you able to ssh to localhost (ssh to VM itself from within the VM) Chris Winnicki chris.winnicki at windriver.com 613-963-1329 From: Anirudh Gupta [Anirudh.Gupta at hsc.com] Sent: Thursday, June 06, 2019 10:48 PM To: Winnicki, Chris; openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Hi Chris, I am pinging from one VM to another over graphical console, which is successful. But when I try to run ssh or iperf, then there is no success. I am using Ubuntu 16.04 Image, which is ssh enabled and yes the service sshd is running. I have created a flat network as well as vlan network and tried doing ssh/iperf on both, but with no success. There is no virtual router. I am suspecting the issue mentioned in the below bug https://bugs.launchpad.net/starlingx/+bug/1790514 But I have no understanding as why it is happening. Regards Anirudh Gupta (Senior Engineer) From: Winnicki, Chris Sent: 07 June 2019 03:09 To: Anirudh Gupta ; openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Anirudh, can you provide some details with respect to: 1) How are you pinging from one VM to the other (is it over the graphical console ? or namespace ?) 2) What VM image are you using? - Is the VM image enabled for SSH with password ? (assuming sshd is running) 3) Network topology 4) Are you trying to ssh from one VM to the other or from a different network segment? Is there a virtual router in the picture, etc.. Chris Winnicki chris.winnicki at windriver.com 613-963-1329 From: Anirudh Gupta [Anirudh.Gupta at hsc.com] Sent: Thursday, June 06, 2019 12:59 AM To: openstack at lists.openstack.org; openstack-dev at lists.openstack.org; starlingx-announce at lists.starlingx.io; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Unable to run ssh/iperf on StarlingX Vm Hi Team, I have created All in one simplex setup using release 2018.10. I have spawned 2 VM’s on it. The ping is successful between the VM’s, but I am unable to ssh or run iperf on it. Can you please help me in resolving the issue. Regards अनिरुद्ध गुप्ता (वरिष्ठ अभियंता) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Al.Bailey at windriver.com  Fri Jun  7 17:25:35 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Fri, 7 Jun 2019 17:25:35 +0000
Subject: [Starlingx-discuss] Current code will fail at the provisioning step
Message-ID:

A commit just merged in starlingx/config which causes an error when running the provisioning steps.
Commit [1], which just merged this morning, was using a method that was renamed by Commit [2].
I am raising a Launchpad and will submit a fix shortly.

[1] https://review.opendev.org/#/c/657383/
[2] https://review.opendev.org/#/c/658193/

Al

From Al.Bailey at windriver.com  Fri Jun  7 19:03:15 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Fri, 7 Jun 2019 19:03:15 +0000
Subject: [Starlingx-discuss] Current code will fail at the provisioning step
In-Reply-To:
References:
Message-ID:

The Launchpad for this issue has been raised. [1]
The review with the fix has been uploaded [2]. It passes all the provisioning steps, unlock, etc. Once I have verified that it passes the application-apply, I will update it.

Also note: a recent submission (either today or yesterday) added a new docker image. For people behind firewalls who maintain a private docker image mirror, please be sure to pull in stx-keystone-api-proxy:master-centos-stable-latest

Al

[1] https://bugs.launchpad.net/starlingx/+bug/1832025
[2] https://review.opendev.org/#/c/664020

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: Friday, June 07, 2019 1:26 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Current code will fail at the provisioning step

A commit just merged in starlingx/config which causes an error when running the provisioning steps.
Commit [1], which just merged this morning, was using a method that was renamed by Commit [2].
I am raising a Launchpad and will submit a fix shortly.

[1] https://review.opendev.org/#/c/657383/
[2] https://review.opendev.org/#/c/658193/

Al
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
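For mirror maintainers, a minimal sketch of pulling the new image through to a private registry. The starlingx/ namespace on Docker Hub and the registry.local:5000 mirror address are assumptions for illustration; substitute your own mirror:

    docker pull starlingx/stx-keystone-api-proxy:master-centos-stable-latest
    docker tag starlingx/stx-keystone-api-proxy:master-centos-stable-latest \
        registry.local:5000/starlingx/stx-keystone-api-proxy:master-centos-stable-latest
    docker push registry.local:5000/starlingx/stx-keystone-api-proxy:master-centos-stable-latest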
From Ian.Jolliffe at windriver.com  Fri Jun  7 19:40:15 2019
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Fri, 7 Jun 2019 19:40:15 +0000
Subject: [Starlingx-discuss] [packet-sig] Can't make today's meeting
In-Reply-To:
References: <586E8B730EA0DA4A9D6A80A10E486BC0A4933A@ALA-MBD.corp.ad.wrs.com>
Message-ID: <91E0298A-0835-4BA8-9D32-D32D04317705@windriver.com>

On Tue, Jun 4, 2019 at 1:07 PM Zvonar, Bill wrote:

Hi Curtis, +1 to making this a bi-weekly thing – starting next week?

Sure, starting next week! Sounds great.

So does this mean the next meeting is on the 11th or the 18th? I was planning to invite some other folks from outside the community to the meeting and just want to confirm.

Thanks;
Ian

From: Curtis
Sent: Tuesday, June 4, 2019 12:32 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [packet-sig] Can't make today's meeting

Hi All,

Unfortunately I can't make today's meeting. Please feel free to meet and discuss and I will read the notes.

Perhaps we can change this meeting to run once every couple of weeks or at a non-weekly cadence of some kind.

Thanks, and apologies,
Curtis

--
Blog: serverascode.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.little at windriver.com  Fri Jun  7 19:56:09 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 7 Jun 2019 15:56:09 -0400
Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 132 passed!
In-Reply-To: <1dc4a4c4-8f8b-a5c3-3033-16498b694e3a@windriver.com>
References: <1491386340.17.1559786930325.JavaMail.javamailuser@localhost> <1268630762.26.1559882239471.JavaMail.javamailuser@localhost> <1dc4a4c4-8f8b-a5c3-3033-16498b694e3a@windriver.com>
Message-ID:

We have a successful CENGN build with docker images at last!

Build time stamp was 20190607T142331Z
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190607T142331Z/outputs/

Scott

On 2019-06-07 10:27 a.m., Scott Little wrote:
> We got past the mockchain issue. This time we got hung up on a change
> to build-helm-charts.sh.  A change was merged to that tool.
> Now extra arguments are required (e.g. --app stx-openstack --rpm
> stx-openstack-helm)
>
> A rebuild has been launched
>
> Scott
>
> On 2019-06-07 12:37 a.m., build.starlingx at gmail.com wrote:
>> Project: STX_build_master_master
>> Build #: 131
>> Status: Still Failing
>> Timestamp: 20190607T013000Z
>>
>> Check logs at:
>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190607T013000Z/logs
>> --------------------------------------------------------------------------------
>> Parameters
>>
>> BUILD_CONTAINERS_DEV: false
>> BUILD_CONTAINERS_STABLE: false
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From bruce.e.jones at intel.com  Fri Jun  7 22:02:25 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Fri, 7 Jun 2019 22:02:25 +0000
Subject: [Starlingx-discuss] New wiki home page - version 2.0
Message-ID: <9A85D2917C58154C960D95352B22818BD07624C2@fmsmsx123.amr.corp.intel.com>

I have completely changed my draft of a new wiki home page, to follow the example set by https://www.openstack.org/community. Please take a look and let me know if you like it (or not).

You can find the new draft StarlingX wiki page at https://wiki.openstack.org/wiki/StarlingX/Draft_new_wiki_home_page

Thank you!

brucej
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Brent.Rowsell at windriver.com  Mon Jun 10 11:36:18 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Mon, 10 Jun 2019 11:36:18 +0000
Subject: [Starlingx-discuss] R3 Feature Planning
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB501C64@ALA-MBD.corp.ad.wrs.com>

Folks,

The TSC will be reviewing and prioritizing the feature content over the next few weeks.
Feature candidates reviewed at the PTG can be found in this etherpad [0]. Since the features were interleaved with other PTG topics I have moved the features to its own etherpad [1]. We will use this one going forward.
If you have any feedback on the existing items and/or have other candidates please update the etherpad.

The features will be reviewed during the weekly TSC meetings [2], Thur 10-11 EDT. Features being reviewed during a particular meeting will be added to the agenda [3] in advance.
I would encourage anyone that has input on the R3 content to attend the weekly meetings.

Thanks,

Brent

[0] https://etherpad.openstack.org/p/stx-ptg-agenda
[1] https://etherpad.openstack.org/p/stx-r3-feature-candidates
[2] https://zoom.us/j/342730236
[3] https://etherpad.openstack.org/p/stx-cores
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yong.hu at intel.com  Mon Jun 10 12:23:43 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Mon, 10 Jun 2019 12:23:43 +0000
Subject: [Starlingx-discuss] New wiki home page - version 2.0
Message-ID:

Bruce,
Maybe "Installation guide" could be placed under "User Resources" rather than "Developer Resources"?

In addition, clicking into "Installation Guide", it says the latest release is "Installation guide stx.2019.05". I suppose stx.2019.05 would be modified to "stx.2.0"…

Regards,
Yong

On 08/06/2019, 6:04 AM, "Jones, Bruce E" wrote:

I have completely changed my draft of a new wiki home page, to follow the example set by https://www.openstack.org/community. Please take a look and let me know if you like it (or not).

You can find the new draft StarlingX wiki page at https://wiki.openstack.org/wiki/StarlingX/Draft_new_wiki_home_page

Thank you!

brucej
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From serverascode at gmail.com  Mon Jun 10 12:24:48 2019
From: serverascode at gmail.com (Curtis)
Date: Mon, 10 Jun 2019 08:24:48 -0400
Subject: [Starlingx-discuss] [packet-sig] Can't make today's meeting
In-Reply-To: <91E0298A-0835-4BA8-9D32-D32D04317705@windriver.com>
References: <586E8B730EA0DA4A9D6A80A10E486BC0A4933A@ALA-MBD.corp.ad.wrs.com> <91E0298A-0835-4BA8-9D32-D32D04317705@windriver.com>
Message-ID:

On Fri, Jun 7, 2019 at 3:40 PM Jolliffe, Ian wrote:
>
> On Tue, Jun 4, 2019 at 1:07 PM Zvonar, Bill wrote:
>
> Hi Curtis, +1 to making this a bi-weekly thing – starting next week?
>
> Sure, starting next week! Sounds great.
> > > > So does this mean the next meeting is on the 11th or the 18th ? I was > planning to invite some other folks from outside the community to the > meeting and just want to confirm. > Which would make more sense for your invitees? I'm flexible for these dates. Thanks, Curtis > > > Thanks; > > > > Ian > > > > > > > > *From:* Curtis > *Sent:* Tuesday, June 4, 2019 12:32 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] [packet-sig] Can't make today's meeting > > > > Hi All, > > > > Unfortunately I can't make today's meeting. Please feel free to meet and > discuss and I will read the notes. > > > > Perhaps we can change this meeting to run once every couple of weeks or at > a non-weekly cadence of some kind. > > > > Thanks, and apologies, > > Curtis > > > > -- > > Blog: serverascode.com > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Jun 10 13:14:23 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 10 Jun 2019 13:14:23 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Weekly Meeting Message-ID: Agenda for Monday June 10: 1. This is MS3 week. Focus will be on updates from the following 10 SBs and primes for each SB: SBs waiting on final commits to merge: * 2004760 Containerize the ironic service [Mingyuan] SBs with tasks not yet complete but expected to get out for review this week: * 2004649 Support for OVS as the default virtual switch [Chenje Xu] * 2004520 Containerization Integration [Bob Church] * 2004273 Kubernetes Cluster Network Configuration [Teresa Ho] * 2005312 Containerize the OpenStack Client [Stefan Dinescu] * 2002843 K8s Platform Support [Jerry Sun] * 2003908 Armada Integration [Angie Wang] SBs with exceptions to MS3: * 2003909 HELM Chart Override Generation [Gerry Kopec + Daniel Badea] * 2004764 Removal of bare metal Openstack related code [Al Bailey] * 2005358 stx.config sysinv container cleanup [Al Bailey] 2. Test team: any blocking or critical issues? 3. Open items Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3654 bytes Desc: not available URL: From Stefan.Dinescu at windriver.com Mon Jun 10 14:23:32 2019 From: Stefan.Dinescu at windriver.com (Dinescu, Stefan) Date: Mon, 10 Jun 2019 14:23:32 +0000 Subject: [Starlingx-discuss] Openstackclient will move to a container In-Reply-To: References: Message-ID: Hi all, Just a heads-up, the plan is to merge this feature this week. The 3 reviews that are to be merged are [1], [2] and [3]. For now, the baremetal clients will remain installed, so if you encounter any issues with the containerized clients, you can workaround those by using the "platform-openstack" alias. Make sure you update your workflow to take into account this change. 
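A minimal usage sketch of the client split described above, using the aliases named in this note (service output omitted; exact commands depend on your deployment):

    # containerized clients handle the OpenStack services (nova, glance, cinder, ...)
    openstack server list
    openstack image list

    # platform services (keystone, barbican) go through the bare metal client alias
    platform-openstack endpoint list

    # equivalently, the full path bypasses the wrapper alias
    /usr/bin/openstack endpoint list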
As always, if you have any questions, feel free to ask me.

Thanks,
Stefan

[1]: https://review.opendev.org/#/c/654423/16
[2]: https://review.opendev.org/#/c/654424/19
[3]: https://review.opendev.org/#/c/655118/11
________________________________
From: Dinescu, Stefan
Sent: Wednesday, April 24, 2019 8:58 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Openstackclient will move to a container

Hi everyone,

As part of storyboard [0], openstackclients will move from a baremetal installation to being run inside a container. The platform openstackclient will only be able to be used for platform services (keystone, barbican). For all other services (nova, glance, cinder etc) the containerized clients must be used.

To ensure a smooth transition, the submitted code will include a wrapper so that openstack commands will function as normal. The "openstack" command is aliased to this wrapper and will only be able to be used for the container services.

The clients pod will be configured automatically with the correct "clouds.yaml" auth file, so no extra steps are needed to configure the pod.

In order to use the platform openstack command, another alias is provided for it: "platform-openstack". You can also access the platform openstack by using the full path of the executable: "/usr/bin/openstack"

For the first batch of commits, the platform clients will not be removed, but they are expected to be removed in the following weeks, so please update any automation scripts you might have for this new behavior.

If you have any questions regarding this feature/change, feel free to ask me.

Thanks,
Stefan

[0] Storyboard: https://storyboard.openstack.org/#!/story/2005312
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vm.rod25 at gmail.com  Mon Jun 10 15:11:18 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Mon, 10 Jun 2019 10:11:18 -0500
Subject: [Starlingx-discuss] [Multi-OS] Meeting notes 6/10/19
Message-ID:

Multi-OS team meeting

Summary of meeting: 6/10/19

- Open

- Multi-OS support (Debian):
  - Marcela to send Debian base build scripts: DONE
    - https://review.opendev.org/#/q/status:merged+project:starlingx/fault+branch:master+topic:multios
  - Al asked if we are going to have a Zuul job for reviews that break the control and rules files
  - Meeting with stakeholders for feedback

- Multi-OS support (open-SUSE):
  - There will be a war room tomorrow
  - We are at 18 of 88 spec files, but there are many that are not necessary to migrate to open-SUSE
    - https://build.opensuse.org/project/show/Cloud:StarlingX:2.0
  - The testing phase of the current effort to build the block services in open-SUSE
  - Test integrity of the RPMs against current CentOS - POC https://lvc.github.io/pkgdiff/
  - Clean spec files, enable warnings and errors in rpmlint (part of OBS)
  - Spec cleaner script used by openSUSE that would be nice to incorporate into zuul
  - The idea of creating an OBS in CENGN and incorporating it into zuul as a 3rd party CI, similar to openSUSE and RH for the OpenStack RPM packages

- Multi-OS support (yocto):
  - Will be sharing the initial plan the first week of July
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Al.Bailey at windriver.com  Mon Jun 10 15:46:11 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Mon, 10 Jun 2019 15:46:11 +0000
Subject: [Starlingx-discuss] Another sanity-like issue related to host-unlock
Message-ID:

Another probable sanity issue has been discovered.
The commit [1] that impacted provisioning also impacts the ability to do a host-unlock if stx-openstack has been applied. This is being tracked by Launchpad [2] and review [3] I am testing the fix now. Al [1] https://review.opendev.org/#/c/657383/ [2] https://bugs.launchpad.net/starlingx/+bug/1832237 [3] https://review.opendev.org/#/c/664263/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Jun 10 18:40:58 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 10 Jun 2019 18:40:58 +0000 Subject: [Starlingx-discuss] R3 Feature Planning In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB501C64@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB501C64@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BD0763AC7@fmsmsx123.amr.corp.intel.com> Brent, thank you, this is a very good list. I added a line to "continue multi-OS build preparation" under Build, and highlighted key things of interest to our stakeholders: TSN, Redfish, OpenStack Train and the hardware accelerators. There is no line item for "software update from 2.0", should that be added? brucej From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Monday, June 10, 2019 4:36 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] R3 Feature Planning Folks, The TSC will be reviewing and prioritizing the feature content over the next few weeks. Features candidates reviewed at the PTG can be found in this etherpad [0]. Since the features were interleaved with other PTG topics I have moved the features to its own etherpad [1]. We will this one going forward. If you have any feedback on the existing items and/or have other candidates please update the etherpad. The features will be reviewed during the weekly TSC meetings [2], Thur 10-11 EDT. Features being reviewed during a particular meeting will be added to the agenda [3] in advance. I would encourage anyone that has input on the R3 content to attend the weekly meetings. Thanks, Brent [0] https://etherpad.openstack.org/p/stx-ptg-agenda [1] https://etherpad.openstack.org/p/stx-r3-feature-candidates [2] https://zoom.us/j/342730236 [3] https://etherpad.openstack.org/p/stx-cores -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Mon Jun 10 19:24:22 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Mon, 10 Jun 2019 19:24:22 +0000 Subject: [Starlingx-discuss] R3 Feature Planning In-Reply-To: <9A85D2917C58154C960D95352B22818BD0763AC7@fmsmsx123.amr.corp.intel.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB501C64@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD0763AC7@fmsmsx123.amr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB502E15@ALA-MBD.corp.ad.wrs.com> Thanks Bruce. The upgrade for R2 -> R3 is at line 200 . Brent From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Monday, June 10, 2019 2:41 PM To: Rowsell, Brent Cc: starlingx-discuss at lists.starlingx.io Subject: RE: R3 Feature Planning Brent, thank you, this is a very good list. I added a line to "continue multi-OS build preparation" under Build, and highlighted key things of interest to our stakeholders: TSN, Redfish, OpenStack Train and the hardware accelerators. There is no line item for "software update from 2.0", should that be added? 
brucej From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Monday, June 10, 2019 4:36 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] R3 Feature Planning Folks, The TSC will be reviewing and prioritizing the feature content over the next few weeks. Features candidates reviewed at the PTG can be found in this etherpad [0]. Since the features were interleaved with other PTG topics I have moved the features to its own etherpad [1]. We will this one going forward. If you have any feedback on the existing items and/or have other candidates please update the etherpad. The features will be reviewed during the weekly TSC meetings [2], Thur 10-11 EDT. Features being reviewed during a particular meeting will be added to the agenda [3] in advance. I would encourage anyone that has input on the R3 content to attend the weekly meetings. Thanks, Brent [0] https://etherpad.openstack.org/p/stx-ptg-agenda [1] https://etherpad.openstack.org/p/stx-r3-feature-candidates [2] https://zoom.us/j/342730236 [3] https://etherpad.openstack.org/p/stx-cores -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Jun 10 19:56:40 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 10 Jun 2019 19:56:40 +0000 Subject: [Starlingx-discuss] Distro.openstack agenda for June 11th Message-ID: <9A85D2917C58154C960D95352B22818BD0763BF3@fmsmsx123.amr.corp.intel.com> 6/11 meeting * Nova Placement changes * 661679 has merged, 662229, 662371 & 662614 pending reviewers/reviews * Once Placement lands, we will need/want to rebase our Nova branch. How often can/should we do that? * NUMA backport patch fixes merged into starlingx staging. Have the fixes been posted to Artom's review in Nova? * vCPU model spec - re-proposed by Eric - https://review.openstack.org/#/c/642030/ - is there a new spec link? * Spreadsheet review: https://docs.google.com/spreadsheets/d/1udAtEpQljV2JZVs-525UhWyx-5ePOaSSkKD1CS27ohU/edit?usp=sharing * Story review: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.openstack&project_group_id=86 * Bug review: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Mon Jun 10 22:05:37 2019 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 10 Jun 2019 15:05:37 -0700 Subject: [Starlingx-discuss] #! /usr/bin/env usage in python files Message-ID: Folks, As you know we have been using OBS to start building openSUSE based rpms. One set of the warning / errors we have seen have to do with executable vs non-executable files based on location, premissions, and shebang contents. Sometimes it's an executable file (755) without a shebang, sometimes it's a file that contains shebang but is not executable (644) and there are other cases. We are working to submit permission fix-up and/or changes adding/removing shebang as needed Many of the scripts that do have shebang use /usr/bin/env which prevents the RPM runtime from correctly detecting the dependencies. So I would like to find out if there is any reason to not use /usr/bin/python directly or other executable as appropriate. I am aware that 'env' is used to help determine the explicit location of a binary in case it's in a different location on different OS implementations. For our proposes, currently all the OSes for the multiOS discussion have python in /usr/bin. 
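A minimal sketch of the kind of shebang normalization being proposed here, run against a checked-out source tree (the paths are illustrative, not a specific StarlingX repository):

    # Replace 'env'-style python shebangs with the direct interpreter path.
    # rpmbuild's automatic dependency generator turns an interpreter shebang
    # into a Requires; '#!/usr/bin/env python' yields a dependency on
    # /usr/bin/env instead of on python, which is what breaks detection.
    grep -rlZ '^#!/usr/bin/env python' . \
        | xargs -0 sed -i '1s|^#!/usr/bin/env python|#!/usr/bin/python|'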
If there is no issues, we can start submitting patches for review. Thanks Sau! From build.starlingx at gmail.com Mon Jun 10 23:51:31 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 10 Jun 2019 19:51:31 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 340 - Failure! Message-ID: <1361309303.34.1560210692706.JavaMail.javamailuser@localhost> Project: STX_DL_container_setup Build #: 340 Status: Failure Timestamp: 20190610T233414Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190610T233000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190610T233000Z DOCKER_DL_ID: jenkins-master-20190610T233000Z-downloader PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190610T233000Z/logs DOCKER_DL_TAG: master-20190610T233000Z-downloader-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190610T233000Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Mon Jun 10 23:51:35 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 10 Jun 2019 19:51:35 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 137 - Failure! Message-ID: <870638630.37.1560210696610.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 137 Status: Failure Timestamp: 20190610T233000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190610T233000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From maria.g.perez.ibarra at intel.com Tue Jun 11 02:00:49 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 11 Jun 2019 02:00:49 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190610 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-10 (link) Status: RED ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs | 33 TCs FAIL Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 36 TCs FAIL Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs | 33 TCs FAIL Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs ============================================================================================================ Secondary controller is administratively locked https://bugs.launchpad.net/starlingx/+bug/1832269 Host unlock is failing with the error helm_override_get https://bugs.launchpad.net/starlingx/+bug/1832237 Nova unable to find valid hosts caused failure creating servers https://bugs.launchpad.net/starlingx/+bug/1832279 For more detail of the tests: 
https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-Platform Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Jun 11 18:06:57 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 11 Jun 2019 18:06:57 +0000 Subject: [Starlingx-discuss] Community Call (June 12, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A4CAAA@ALA-MBD.corp.ad.wrs.com> Reminder of tomorrow's Community call - topics include... - MS-3 is upon us - call for participation in Thursday's release team meeting - bug count / resolution forecast - update from the first "First Contact" SIG last week - update on actions from previous meetings Please feel free to add topics to the agenda at [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso= 20190612T1400 From ildiko.vancsa at gmail.com Tue Jun 11 18:44:14 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 11 Jun 2019 20:44:14 +0200 Subject: [Starlingx-discuss] Open Infrastructure Summit and PTG Edge overview and next steps Message-ID: Hi, There were a lot of interesting discussions about edge computing at the Open Infrastructure Summit[1] and PTG in Denver. Hereby I would like to use the opportunity to share overviews and some progress and next steps the community has taken since. You can find a summary of the Forum discussions here: https://superuser.openstack.org/articles/edge-and-5g-not-just-the-future-but-the-present/ Check the following blog post for a recap on the PTG sessions: https://superuser.openstack.org/articles/edge-computing-takeaways-from-the-project-teams-gathering/ The Edge Computing Group is working towards testing the minimal reference architectures for which we are putting together hardware requirements. You can catch up and chime in on the discussion on this mail thread: http://lists.openstack.org/pipermail/edge-computing/2019-June/000597.html For Ironic related conversations since the event check these threads: * http://lists.openstack.org/pipermail/edge-computing/2019-May/000582.html * http://lists.openstack.org/pipermail/edge-computing/2019-May/000588.html We are also in progress to write up an RFE for Neutron to improve segment range management for edge use cases: http://lists.openstack.org/pipermail/edge-computing/2019-May/000589.html If you have any questions or comments to any of the above topics you can respond to this thread, chime in on the mail above threads, reach out on the edge-computing mailing[2] list or join the weekly edge group calls[3]. If you would like to get involved with StarlingX you can find pointers on the website[4]. Thanks, Ildikó (IRC: ildikov on Freenode) [1] https://www.openstack.org/videos/summits/denver-2019 [2] http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing [3] https://wiki.openstack.org/wiki/Edge_Computing_Group#Meetings [4] https://www.starlingx.io/community/ From scott.little at windriver.com Tue Jun 11 19:07:53 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 11 Jun 2019 15:07:53 -0400 Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 340 - Failure! 
In-Reply-To: <1361309303.34.1560210692706.JavaMail.javamailuser@localhost>
References: <1361309303.34.1560210692706.JavaMail.javamailuser@localhost>
Message-ID: <57196867-7705-869d-fe1e-c856e4747663@windriver.com>

Another transient failure. Yum was trying to update its repodata from vault.centos.org. The repomd.xml points to a file that no longer exists. When restarted half an hour later, all was fine.

Since adding retries and timeouts to yum.conf internal to the docker build isn't cutting it, we'll try wrapping 'docker build' itself in a retry loop.

Scott

On 2019-06-10 7:51 p.m., build.starlingx at gmail.com wrote:
> Project: STX_DL_container_setup
> Build #: 340
> Status: Failure
> Timestamp: 20190610T233414Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190610T233000Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190610T233000Z
> DOCKER_DL_ID: jenkins-master-20190610T233000Z-downloader
> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190610T233000Z/logs
> DOCKER_DL_TAG: master-20190610T233000Z-downloader-image
> PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190610T233000Z/logs
> MY_REPO_ROOT: /localdisk/designer/jenkins/master
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Teresa.Ho at windriver.com  Tue Jun 11 20:20:43 2019
From: Teresa.Ho at windriver.com (Ho, Teresa)
Date: Tue, 11 Jun 2019 20:20:43 +0000
Subject: [Starlingx-discuss] Provisioning changes to host interface commands
Message-ID: <918130236148D14B982C7B8BC1C06EA16717DE15@ALA-MBD.corp.ad.wrs.com>

This is a heads-up that a commit https://review.opendev.org/#/c/661655/ for the story https://storyboard.openstack.org/#!/story/2004273 will soon be merged. It impacts the configuration procedure and may also impact automation.

The '--networks' parameter is removed from the host-if-add and host-if-modify commands.
Use the interface-network-assign command to assign a platform network to a platform interface.
Use the interface-datanetwork-assign command to assign a data network to a data interface.
The wiki will be modified to reflect these changes.
For AIO-SX, the current syntax:

    OAM_IF=enp0s3
    system host-if-modify controller-0 $OAM_IF -c platform --networks oam

becomes

    OAM_IF=enp0s3
    system host-if-modify controller-0 $OAM_IF -c platform
    system interface-network-assign controller-0 $OAM_IF oam

For Standard and AIO-DX, the current procedure to reconfigure the platform interfaces after bootstrap,

    source /etc/platform/openrc
    OAM_IF=enp0s3
    MGMT_IF=enp0s8
    system host-if-modify controller-0 lo -c none
    system host-if-modify controller-0 $OAM_IF --networks oam -c platform
    system host-if-modify controller-0 $MGMT_IF -c platform --networks mgmt
    system host-if-modify controller-0 $MGMT_IF -c platform --networks cluster-host

becomes

    source /etc/platform/openrc
    OAM_IF=enp0s3
    MGMT_IF=enp0s8
    system host-if-modify controller-0 lo -c none
    IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6 =="lo") print $4;}')
    for UUID in $IFNET_UUIDS; do
        system interface-network-remove ${UUID}
    done
    system host-if-modify controller-0 $OAM_IF -c platform
    system host-if-modify controller-0 $MGMT_IF -c platform
    system interface-network-assign controller-0 $OAM_IF oam
    system interface-network-assign controller-0 $MGMT_IF mgmt
    system interface-network-assign controller-0 $MGMT_IF cluster-host

For a data interface, the current syntax:

    system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}

becomes

    system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
    system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}

Regards,
Teresa
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ian.Jolliffe at windriver.com  Tue Jun 11 20:24:00 2019
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Tue, 11 Jun 2019 20:24:00 +0000
Subject: [Starlingx-discuss] [TSC] 6/3 minutes
Message-ID:

Election update (ildikov)

3 valid candidates -> we are skipping the formal vote
Election officials to add a summary of the learnings from this round of elections, to make the process go easier for all
See emails from Ildiko for more details

starlingx/deploy repo for Ansible playbooks (mpeters-wrs)

all good to move forward - repo will be owned by config team

R3 feature prioritization (brent)

how do we want to proceed? for prep for StarlingX - R3 Release in November - agreed
Send list to ML
Baseline list - Python 3, etc

Test Culture Discussion (Ian)

We agreed that when the R3 branch is opened, we will shift to a mode where unit tests will need to be delivered along with new code.
We will need to build out frameworks and build on prior work with DevStack - plugins are available (at various stages of completion)
We discussed adding to the contributor guide to make sure that test aspects come out clearly and help reinforce the culture we are working towards.
More discussions/posts to mailing list to come

FYI (curtis) - Some interesting discussion @OPNFV https://wiki.opnfv.org/display/PROJ/Project+Proposals+Airship

PSA - versioning proposal coming from Saul - towards a spec to follow

From build.starlingx at gmail.com  Tue Jun 11 20:49:15 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 11 Jun 2019 16:49:15 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_container_setup - Build # 302 - Failure!
Message-ID: <1385648909.42.1560286156835.JavaMail.javamailuser@localhost> Project: STX_BUILD_container_setup Build #: 302 Status: Failure Timestamp: 20190611T204912Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190611T202431Z/logs -------------------------------------------------------------------------------- Parameters PROJECT: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190611T202431Z DOCKER_BUILD_ID: jenkins-master-20190611T202431Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190611T202431Z/logs DOCKER_BUILD_TAG: master-20190611T202431Z-builder-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190611T202431Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Tue Jun 11 20:49:19 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 11 Jun 2019 16:49:19 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 140 - Failure! Message-ID: <1559436394.45.1560286160375.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 140 Status: Failure Timestamp: 20190611T202431Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190611T202431Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From kennelson11 at gmail.com Tue Jun 11 22:57:07 2019 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 11 Jun 2019 15:57:07 -0700 Subject: [Starlingx-discuss] Shanghai PTG Changes Message-ID: Hello All, After Denver we were able to take time to reflect on the improvements we can make now that the PTG will occur immediately following the summit for the near future. While Shanghai will have its own set of variables, it's still good to reevaluate how we allocate time for groups and how we structure the week overall. tldr; - Onboarding is moving into the PTG for this round (updates stay a part of the Summit) - You can still do regular PTG stuff (or both onboarding and regular PTG stuff) - PTG slots can be as short as 1/4 of a day - More shared space at the Shanghai venue, less dedicated space - New breakdown: 1.5 days of Forum and 3.5 days of PTG - Survey will be out in a few weeks for requesting PTG space We'll have our traditional project team meetings at the PTG in Shanghai as the default format, that won't change. However, we know many of you don't expect to have all your regulars attend the PTG in Shanghai. To combat this and still help project teams make use of the PTG in the most effective way possible we are encouraging teams that want to meet but might not have all the people they need to have technical discussions to meet anyway and instead focus on a more thorough onboarding of our Chinese contributors. Project teams could also do a combination of the two, spend an hour and a half on onboarding (or however much time you see fit) and then have your regular technical discussions after. Project Updates will still be a part of the Summit like normal, its just the onboardings that will be compacted into the PTG for Shanghai. We are making PTG days more granular as well and will have the option to request 1/4 day slots in an effort to leave less empty space in the schedule. So if you are only doing onboarding, you probably only need 1/4 to 1/2 of a day. 
If you are doing just your regular technical discussions and still need three days, thats fine too. The venue itself (similar to Denver) will have a few large rooms for bigger teams to meet, however, most teams will meet in shared space. For those teams meeting to have only technical discussions and for teams that have larger groups, we will try to prioritize giving them their own dedicated space. For the shared spaces, we will add to the PTGbot more clearly defined locations within the shared space so its easier to find teams meeting there. I regret to inform you that, again, projection will be a very limited commodity. Yeah.. please don't shoot the messenger. Due to using mainly shared space, projection is just something we are not able to offer. The other change I haven't already mentioned is that we are going to have the PTG start a half day early. Instead of only being 3 days like in Denver, we are going to add more time to the PTG and subtract a half day from the Forum. Basically the breakdown will be 1.5 Forum and 3.5 PTG with the Summit overlapping the first two days. I will be sending the PTG survey out to PTLs/Project Leads in a couple weeks with a few changes. -Kendall Nelson (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Tue Jun 11 23:23:58 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 11 Jun 2019 23:23:58 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190611 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-11 (link) Status: GREEN ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 51 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs ============================================================================================================ Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From cindy.xie at intel.com  Wed Jun 12 08:01:26 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 12 Jun 2019 08:01:26 +0000
Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F9B283@SHSMSX104.ccr.corp.intel.com>

Agenda for the 6/12 meeting:
1. kernel upgrade to 3.10.0-957.12.2 status (Haitao/Shuicheng)
2. QAT upgrade test status (Ricardo/Haitao)
3. Ceph upgrade test status (Fernando/Abraham, Ovidiu/Daniel/Tingjie/Martin)
4. Influxdb upgrade (Shuicheng)
5. Opens (all)

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'starlingx-discuss at lists.starlingx.io'; Wold, Saul; 'Rowsell, Brent'
Cc: 'Carlos Cebrian'; 'Waines, Greg'; 'Zhi Zhi2 Chang'; 'Eslimi, Dariush'; Armstrong, Robert H; Jones, Bruce E; Gomez, Juan P; 'Seiler, Glenn'; Chen, Tingjie; Cobbley, David A; Badea, Daniel; Chen, Jacky; Hu, Wei W; Komiyama, Takeo
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, June 12, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From cindy.xie at intel.com  Wed Jun 12 13:41:19 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 12 Jun 2019 13:41:19 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/12
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F9B74C@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for the 6/12 meeting:

1. kernel upgrade to 3.10.0-957.12.2 status (Haitao/Shuicheng)
This is to address the CVE issue in LP: https://bugs.launchpad.net/starlingx/+bug/1830487
All 3 patches merged: https://review.opendev.org/#/q/branch:master+topic:Bug/1830487
Will need to monitor sanity results from Ada's team tomorrow.

2. QAT upgrade test status (Ricardo/Haitao)
Integrated QAT test passed from China; the external PCIe QAT device is not verified from GDC. Need remote access to the GDC system for debug.
Ricardo set up a server with an embedded QAT device yesterday. Share details with Shuai and Shuicheng if Ricardo still hits issues.
4 test cases executed & passed; others are still pending to run because PCI passthrough is not working.
AR: Ricardo to send access details for the GDC system.
AR: Shuai to send the BIOS setting for SR-IOV; sent a picture to Ricardo.

3. Ceph upgrade test status (Fernando/Abraham, Ovidiu/Daniel/Tingjie/Martin)
Test cases updated: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1145711595
Issues blocking P1 cases:
- Task 30351 under Story: https://storyboard.openstack.org/#!/story/2003909, Abraham to send email to Frank Miller and query the status of this task.
- https://bugs.launchpad.net/starlingx/+bug/1827936, Low priority, should pass the testing with workaround. AR: Fernando to re-run the test with workaround.
- https://bugs.launchpad.net/starlingx/+bug/1827246, patch uploaded & under review. AR: re-run the test cases with the WA and report back any issue still seen.
Abraham/Fernando to continue with P2 test cases.
Regarding the MDS server, Tingjie to discuss with Abraham offline in detail; Abraham already sent the email.
Tingjie: 14 LPs gating for stx.storage. 5 LPs owned by Tingjie, with 3 having patches uploaded and pending review.

4. Influxdb upgrade (Shuicheng)
- Patches uploaded and review in progress:
https://review.opendev.org/#/c/664155/
https://review.opendev.org/#/c/664156/
https://review.opendev.org/#/c/664157/
https://review.opendev.org/#/c/661668/
Dev testing is running in parallel with addressing the review comments. Need to get this in before MS3.
Test case steps pending from WR engineer Eric McDold.

5. Opens (all)
Shuai: people who are not in the "starlingx-bugs" group profile cannot assign tickets to others. You can assign a bug to yourself, or you can ask Yong or Cindy or Bruce or Saul to assign bugs to others.

-----Original Message-----
From: Xie, Cindy
Sent: Wednesday, June 12, 2019 4:01 PM
To: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Weekly StarlingX non-OpenStack distro meeting

Agenda for the 6/12 meeting:
1. kernel upgrade to 3.10.0-957.12.2 status (Haitao/Shuicheng)
2. QAT upgrade test status (Ricardo/Haitao)
3. Ceph upgrade test status (Fernando/Abraham, Ovidiu/Daniel/Tingjie/Martin)
4. Influxdb upgrade (Shuicheng)
5. Opens (all)

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'starlingx-discuss at lists.starlingx.io'; Wold, Saul; 'Rowsell, Brent'
Cc: 'Carlos Cebrian'; 'Waines, Greg'; 'Zhi Zhi2 Chang'; 'Eslimi, Dariush'; Armstrong, Robert H; Jones, Bruce E; Gomez, Juan P; 'Seiler, Glenn'; Chen, Tingjie; Cobbley, David A; Badea, Daniel; Chen, Jacky; Hu, Wei W; Komiyama, Takeo
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, June 12, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From gaosong_1250 at 163.com  Tue Jun 11 02:54:48 2019
From: gaosong_1250 at 163.com (gao.song)
Date: Tue, 11 Jun 2019 10:54:48 +0800 (CST)
Subject: [Starlingx-discuss] Edge nodes
In-Reply-To:
References:
Message-ID: <2d27f48a.55a1.16b4475a434.Coremail.gaosong_1250@163.com>

Hi Adrien:
The install document does not cover deployment of an edge cloud. You need to install the central cloud and the subcloud nodes using the standard configuration mode.
Add another subcloud using the StarlingX GUI to generate the subcloud configuration file, copy this file to the subcloud controller, and run config_subcloud $ini_file. Then you get what you want.
For the second question, it is just related to the node number: with enough nodes, you can deploy a controller/compute/storage per node. Otherwise, deploying an All-in-one can do.

On 2019-06-06 19:31:52, "Adrien Macor" wrote:

Hi,

I'm not completely sure how to deploy edge sites. First I will install my central cloud (https://docs.starlingx.io/installation_guide/current/index.html).
But then I didn't find how to install, configure, etc. the edge sites. By the way, I found this picture: and was wondering what is the difference between those edges? I understand that different VNFs require different resources (hardware); I wanted to know where the differences are explained? Is this link right?

Thanks for the help
Adrianp
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Outlook-1ld0zqzi.png
Type: image/png
Size: 44267 bytes
Desc: not available
URL:

From huyp at inspur.com  Tue Jun 11 03:35:10 2019
From: huyp at inspur.com (huyp)
Date: Tue, 11 Jun 2019 11:35:10 +0800
Subject: [Starlingx-discuss] Host localhost does not have the right image!.
Message-ID: <24e34101-5494-4524-a83c-03f426c796dc@Jtjnmail201615.home.langchao.com>

I downloaded the ISO with the link http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190604T144018Z/outputs/iso/bootimage.iso
I deployed AIO-SX mode with a VM which has eight vCPUs, 64G memory, a 240G hard drive, and 3 NICs.
When I execute "ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml", it fails with the message "Host localhost does not have the right image!."
Why is that?

Sent from the Mail app for Windows 10

From Anirudh.Gupta at hsc.com  Tue Jun 11 06:46:25 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Tue, 11 Jun 2019 06:46:25 +0000
Subject: [Starlingx-discuss] No IPV6 ip on VM being created on StarlingX
Message-ID:

Hi Team,

I was trying to create an IPv6 network on StarlingX. When I spawn a VM on that, the IP is visible in Horizon, but I am not getting the IP inside the VM. I have tried using CentOS/Ubuntu VMs. Even after manually enabling dhcp inside the VM, the IP is not being allocated to the interface.

On Ubuntu 16.04, in the /etc/network/interfaces file

    auto eth0
    iface eth0 inet6 dhcp

But still an IPv6 address is not automatically allocated to the interface. I need to manually add the IP using the command ip -6 addr add eff0:eff0:eff0::a/128 dev eth0.

Can you please help me in resolving my issue.

Regards
Anirudh Gupta (Senior Engineer)
Hughes Systique Corporation
D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ricardo.o.perez at intel.com  Tue Jun 11 21:32:14 2019
From: ricardo.o.perez at intel.com (Perez, Ricardo O)
Date: Tue, 11 Jun 2019 21:32:14 +0000
Subject: [Starlingx-discuss] QAT Validation
References: <000501d51cc3$4d1f3930$e75dab90$@neusoft.com>
Message-ID:

Hi Zhao,

I have tried both ISOs in the current WolfPass server using the external PCIe QAT device.
However I'm still hitting the same error after setting up the pci_passthrough property in the flavor and trying to launch a VM using such a flavor. [cid:image001.jpg at 01D52073.382C4190] I'm just finishing the installation of StarlingX on a server with an embedded QAT device. As soon as I have finished I'll let you know the results. Thanks in advance -Ricardo From: Perez, Ricardo O Sent: Thursday, June 6, 2019 11:00 PM To: 'zhaos at neusoft.com' Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; Su Yang Subject: RE: QAT Validation Hi Zhao, I have just read your e-mail. Thanks for letting me know that you guys are on holiday (I didn't know) :). Let me check with the proposed image and see how it goes. I'll let you know the results by e-mail. Thanks -Ricardo From: zhaos at neusoft.com [mailto:zhaos at neusoft.com] Sent: Thursday, June 6, 2019 6:55 PM To: Perez, Ricardo O Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; zhaos at neusoft.com; Su Yang Subject: RE: QAT Validation Hi Ricardo: Because our colleagues are currently on China's Dragon Boat Festival holiday, we may not be able to participate in your meeting today. We expect to schedule an appointment next Tuesday (6/11). We are very sorry that we cannot attend your meeting today. As for the operation guide we provided to you, we have actually run through it many times; please be sure to perform the operations in order. Second, if there are still problems, we recommend using the version (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190530T152953Z/) to try again. Above, thank you! Wish you happy everyday! -------------------------------- From: zhao.shuai Tel: 13704099430 Co.: Neusoft -----Original Appointment----- From: Perez, Ricardo O Sent: June 7, 2019 5:05 To: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; zhaos at neusoft.com Subject: QAT Validation When: Thursday, June 6, 2019 23:00 to Friday, June 7, 2019 0:00 (UTC-06:00) Guadalajara, Mexico City, Monterrey. Where: https://zoom.us/j/2962988538 Importance: High Hello guys, I'm following all your steps using the provided files and here is the status: - ISO installation + provided helm charts - success - Nova overrides using provided yaml file - failing So I would like to have a live session to show you the errors and see what is still missing from my side. P.S. You can forward this meeting to required people also. Thanks -Ricardo Ricardo Perez is inviting you to a scheduled Zoom meeting. Join Zoom Meeting https://zoom.us/j/2962988538 One tap mobile +14086380968,,2962988538# US (San Jose) +16465588656,,2962988538# US (New York) Dial by your location +1 408 638 0968 US (San Jose) +1 646 558 8656 US (New York) Meeting ID: 296 298 8538 Find your local number: https://zoom.us/u/abJfeFY5aC -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 18253 bytes Desc: image001.jpg URL: From shuaix.zhao at intel.com Wed Jun 12 01:18:14 2019 From: shuaix.zhao at intel.com (Zhao, ShuaiX) Date: Wed, 12 Jun 2019 01:18:14 +0000 Subject: [Starlingx-discuss] QAT Validation In-Reply-To: References: <000501d51cc3$4d1f3930$e75dab90$@neusoft.com> Message-ID: <95CFEE63060B1D4983C4A3B2D87E4619956B5A@shsmsx102.ccr.corp.intel.com> Hi Ricardo: Thanks for your feedback.
You are right; we currently recommend using the server with the embedded QAT device first. We are not sure about the device compatibility of a server with a plugged-in QAT PCIe card. We currently recommend running our STX testing process in the most reliable hardware server environment. And we are very happy to discuss with you actively. Above, thank you! Wish you happy everyday! -------------------------------- From: Neusoft zhao.shuai Tel: 13704099430 Co.: Neusoft From: Perez, Ricardo O Sent: Wednesday, June 12, 2019 5:32 AM To: starlingx-discuss at lists.starlingx.io Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; Su Yang; zhaos at neusoft.com; Cabrales, Ada; Xie, Cindy Subject: RE: QAT Validation Hi Zhao, I have tried both ISOs on the current WolfPass server using the external PCIe QAT device. However I'm still hitting the same error after setting up the pci_passthrough property in the flavor and trying to launch a VM using such a flavor. [cid:image001.jpg at 01D52073.382C4190] I'm just finishing the installation of StarlingX on a server with an embedded QAT device. As soon as I have finished I'll let you know the results. Thanks in advance -Ricardo From: Perez, Ricardo O Sent: Thursday, June 6, 2019 11:00 PM To: 'zhaos at neusoft.com' Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; Su Yang Subject: RE: QAT Validation Hi Zhao, I have just read your e-mail. Thanks for letting me know that you guys are on holiday (I didn't know) :). Let me check with the proposed image and see how it goes. I'll let you know the results by e-mail. Thanks -Ricardo From: zhaos at neusoft.com [mailto:zhaos at neusoft.com] Sent: Thursday, June 6, 2019 6:55 PM To: Perez, Ricardo O Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; zhaos at neusoft.com; Su Yang Subject: RE: QAT Validation Hi Ricardo: Because our colleagues are currently on China's Dragon Boat Festival holiday, we may not be able to participate in your meeting today. We expect to schedule an appointment next Tuesday (6/11). We are very sorry that we cannot attend your meeting today. As for the operation guide we provided to you, we have actually run through it many times; please be sure to perform the operations in order. Second, if there are still problems, we recommend using the version (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190530T152953Z/) to try again. Above, thank you! Wish you happy everyday! -------------------------------- From: zhao.shuai Tel: 13704099430 Co.: Neusoft -----Original Appointment----- From: Perez, Ricardo O Sent: June 7, 2019 5:05 To: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; zhaos at neusoft.com Subject: QAT Validation When: Thursday, June 6, 2019 23:00 to Friday, June 7, 2019 0:00 (UTC-06:00) Guadalajara, Mexico City, Monterrey. Where: https://zoom.us/j/2962988538 Importance: High Hello guys, I'm following all your steps using the provided files and here is the status: - ISO installation + provided helm charts - success - Nova overrides using provided yaml file - failing So I would like to have a live session to show you the errors and see what is still missing from my side. P.S. You can forward this meeting to required people also. Thanks -Ricardo Ricardo Perez is inviting you to a scheduled Zoom meeting.
Join Zoom Meeting https://zoom.us/j/2962988538 One tap mobile +14086380968,,2962988538# US (San Jose) +16465588656,,2962988538# US (New York) Dial by your location +1 408 638 0968 US (San Jose) +1 646 558 8656 US (New York) Meeting ID: 296 298 8538 Find your local number: https://zoom.us/u/abJfeFY5aC -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 18202 bytes Desc: image001.jpg URL: From cindy.xie at intel.com Wed Jun 12 06:26:23 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 12 Jun 2019 06:26:23 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F9A09E@SHSMSX104.ccr.corp.intel.com> Agenda for 6/12 meeting: 1. kernel upgrade to 3.10.0-957.12.2 status (Haitao/Shuicheng) 2. QAT upgrade test status (Ricardo/Haitao) 3. Ceph upgrade test status (Fernando/Abraham, Ovidiu/Daniel/Tingjie/Martin) 4. Influxdb upgrade (Shuicheng) 5. Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'starlingx-discuss at lists.starlingx.io'; Wold, Saul; 'Rowsell, Brent' Cc: 'Carlos Cebrian'; 'Waines, Greg'; 'Zhi Zhi2 Chang'; 'Eslimi, Dariush'; Armstrong, Robert H; Jones, Bruce E; Gomez, Juan P; 'Seiler, Glenn'; Chen, Tingjie; Cobbley, David A; Badea, Daniel; Chen, Jacky; Hu, Wei W; Komiyama, Takeo Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, June 12, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From Anirudh.Gupta at hsc.com Wed Jun 12 09:52:51 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Wed, 12 Jun 2019 09:52:51 +0000 Subject: [Starlingx-discuss] StarlingX 2019.05 Release Queries Message-ID: Hi Team, I have prepared an All-in-One Simplex/Duplex setup on release 2018.10 and would like to continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: * As per the recent update, the release 2019.05 is delayed and is now expected to be out in August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? * As per the release notes of 2018.10, I am using the pre-built StarlingX image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If yes, what will be the changes in the footprint? And will there be another ISO with Kubernetes support available for the 2019.05 release? * As per the below links for 2018.10 and 2019.05, the hardware requirements are the same.
https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the hardware requirements for either release, or will they remain unchanged? * What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? It would be a great help if anyone could throw some light on my queries, so that I can develop a better picture for going with StarlingX in our production environment. Regards Anirudh Gupta (Senior Engineer) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Wed Jun 12 14:58:06 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Wed, 12 Jun 2019 14:58:06 +0000 Subject: [Starlingx-discuss] #! /usr/bin/env usage in python files In-Reply-To: References: Message-ID: I think you can start submitting reviews to fix those. Over time, almost all of the Python scripts we have been using get created by PBR, which sets up the shebang as either /usr/bin/python or /usr/bin/python2. Some components may not even have been converted to PBR yet. For cases where we created custom Python scripts using /usr/bin/env, it's likely because google/stackoverflow indicate that's the preferred way. There's no specific reason to code it that way, as far as I can remember. We also have (or had) an inconsistency in how the license files in the spec files were published: sometimes using doc, sometimes using the license keyword. Al -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Monday, June 10, 2019 6:06 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] #! /usr/bin/env usage in python files Folks, As you know we have been using OBS to start building openSUSE based rpms. One set of the warnings/errors we have seen has to do with executable vs. non-executable files, based on location, permissions, and shebang contents. Sometimes it's an executable file (755) without a shebang, sometimes it's a file that contains a shebang but is not executable (644), and there are other cases. We are working to submit permission fix-ups and/or changes adding/removing shebangs as needed. Many of the scripts that do have a shebang use /usr/bin/env, which prevents the RPM runtime from correctly detecting the dependencies. So I would like to find out if there is any reason not to use /usr/bin/python directly, or another executable as appropriate. I am aware that 'env' is used to help determine the explicit location of a binary in case it's in a different location on different OS implementations.
For our purposes, currently all the OSes for the multi-OS discussion have python in /usr/bin. If there are no issues, we can start submitting patches for review. Thanks Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Wed Jun 12 14:59:43 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 12 Jun 2019 14:59:43 +0000 Subject: [Starlingx-discuss] Host localhost does not have the right image!. In-Reply-To: <24e34101-5494-4524-a83c-03f426c796dc@Jtjnmail201615.home.langchao.com> References: <24e34101-5494-4524-a83c-03f426c796dc@Jtjnmail201615.home.langchao.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F9BA8A@SHSMSX104.ccr.corp.intel.com> Hi, Thanks for trying the ISO from CENGN. The minimal system requirements are documented on the wiki [1]. Looking at your config, it might be because you only have one HD and it’s too small (the wiki says it needs a 500G HD). [1] https://docs.starlingx.io/installation_guide/latest/index.html From: huyp [mailto:huyp at inspur.com] Sent: Tuesday, June 11, 2019 11:35 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Host localhost does not have the right image!. I downloaded the ISO from the link http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190604T144018Z/outputs/iso/bootimage.iso I deploy AIO-SX mode with a VM which has eight vCPUs, 64G memory, a 240G hard drive, and 3 NICs. When I execute “ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml”, it failed with the message “Host localhost does not have the right image!.” Why is that? Sent from Mail for Windows 10 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tee.Ngo at windriver.com Wed Jun 12 15:01:14 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Wed, 12 Jun 2019 15:01:14 +0000 Subject: [Starlingx-discuss] Host localhost does not have the right image!. In-Reply-To: <24e34101-5494-4524-a83c-03f426c796dc@Jtjnmail201615.home.langchao.com> References: <24e34101-5494-4524-a83c-03f426c796dc@Jtjnmail201615.home.langchao.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A5B4D8@ALA-MBD.corp.ad.wrs.com> Hi, The default wrsroot password (wrsroot is soon to be renamed to sysadmin) is St8rlingX*. You must have chosen a different password. To resolve this issue, just overwrite the default Ansible become password in your host override file (localhost.yml) as follows: ansible_become_pass: <password> or specify it at the command line using the playbook -e (--extra-vars) option; a minimal example follows the quoted message below. This is documented in the StarlingX installation procedure. The error message in the failed task, where it tries to determine if the host has a (unmistakably) StarlingX package, is misleading. The following commit: https://review.opendev.org/664759 which is under review addresses that. Tee From: huyp [mailto:huyp at inspur.com] Sent: June-10-19 11:35 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Host localhost does not have the right image!. I downloaded the ISO from the link http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190604T144018Z/outputs/iso/bootimage.iso I deploy AIO-SX mode with a VM which has eight vCPUs, 64G memory, a 240G hard drive, and 3 NICs. When I execute “ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml”, it failed with the message “Host localhost does not have the right image!.” Why is that?
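A minimal illustration of the two options Tee describes above; "MyPassword" is a placeholder for whatever password was chosen at install time, not a real value. In localhost.yml:

    ansible_become_pass: MyPassword

Or at the command line:

    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml -e "ansible_become_pass=MyPassword"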
Sent from Mail for Windows 10 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jun 12 15:26:07 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 12 Jun 2019 15:26:07 +0000 Subject: [Starlingx-discuss] Community Call (June 12, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A7B5C4@ALA-MBD.corp.ad.wrs.com> Notes & actions from today's Community call below. Bill... MS-3 is this week - release team meeting is Thursday - stx.2.0 active stories: https://storyboard.openstack.org/#!/story/list?status=active&project_group_id=86&tags=stx.2.0 - 31 open / 16 are development related - wrsroot - Numan's team started testing today, expecting to be done by end of day today or tomorrow - Saul said the build is ready for testing now, after he rebased - Numan's team will have to restart their testing to pick up the latest build - if they get the build tonight, they'll be able to finish tomorrow - Bart asked why we need to use the rebased build - since the rebase is a minor thing, we agreed to have Numan carry on with the current build First Contact SIG - good meeting last week, next one is next week (June 20), 30 minutes before TSC call - lots of good ideas from Matt Oliver from the OpenStack FC SIG - we'll start working on prioritizing & assigning First Contact work Bugs - incomplete status: reporters need to make them complete! - chase the originators & their managers/leads - Cindy: mark invalid after a period of time, or assign back to the creator; maybe de-prioritize? - ACTION: Bill to chase the owners - old gating bugs: - 34 that were opened before April 1 - 61 before May 1 - consider a 'too old' date, and just close anything opened before then? - Dean pointed out that sometimes things are no longer relevant due to design changes (e.g. config controller bugs since Ansible came in) - Bart & Dean suggested we should be talking about de-gating old bugs, not closing them - Bill agreed - ACTION: Bill to socialize with the owners that these should be the first to be removed from the gating list - Yong Hu & Ghada discussed - Cindy asked about the resolution forecast - Bill said he'd provide by next week's meeting - ACTION: Bill to reach out to the domain/team owners to provide incoming/outgoing data for the resolution forecast Previous Actions - bitergia - Bill following up with Thierry, the updates discussed last week are in progress - Scott/Dean provide github details to Thierry, per last week - sanity - still not boring, oscillating between red & green - big files - no update, we'll keep it in view - draft new wiki home page - check it out & provide comments to Bruce: https://wiki.openstack.org/wiki/StarlingX/Draft_new_wiki_home_page PSA from Saul re: #! /usr/bin/env usage in python files - some issues they've been discovering with Multi-OS - see Saul's email to the mailing list: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/004907.html -----Original Message----- From: Zvonar, Bill Sent: Tuesday, June 11, 2019 2:07 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (June 12, 2019) Reminder of tomorrow's Community call - topics include... - MS-3 is upon us - call for participation in Thursday's release team meeting - bug count / resolution forecast - update from the first "First Contact" SIG last week - update on actions from previous meetings Please feel free to add topics to the agenda at [0]. Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso= 20190612T1400 From dtroyer at gmail.com Wed Jun 12 16:25:56 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 12 Jun 2019 11:25:56 -0500 Subject: [Starlingx-discuss] #! /usr/bin/env usage in python files In-Reply-To: References: Message-ID: On Wed, Jun 12, 2019 at 9:59 AM Bailey, Henry Albert (Al) wrote: > For cases where we created custom python scripts using /usr/bin/env, it's likely because google/stackoverflow indicate that's the preferred way. There's no specific reason to code it that way, as far as I can remember. Outside of a distro context the biggest reason I do that (use #!/usr/bin/env) is to make using virtual environments easier. I do not think that is a large concern here, the first place I could see it coming up is running in DevStack with virtual envs enabled, which is not as much of a thing as it should be and probably doesn't work now anyway. dt -- Dean Troyer dtroyer at gmail.com From scott.little at windriver.com Wed Jun 12 17:04:12 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 12 Jun 2019 13:04:12 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 140 - Failure! In-Reply-To: <1559436394.45.1560286160375.JavaMail.javamailuser@localhost> References: <1559436394.45.1560286160375.JavaMail.javamailuser@localhost> Message-ID: <390790b8-46a8-e4a0-1939-7cc352bda800@windriver.com> Error on my part.  I have converted the CENGN build to use a single container.  It was previously using two containers.  One for running download scripts, a second to run build scripts.  I missed one reference on this first attempt. Now that it is fixed, Cengn builds are ~20 minutes faster. Scott On 2019-06-11 4:49 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_master > Build #: 140 > Status: Failure > Timestamp: 20190611T202431Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190611T202431Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jun 12 17:17:36 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 Jun 2019 17:17:36 +0000 Subject: [Starlingx-discuss] #! /usr/bin/env usage in python files In-Reply-To: References: Message-ID: <20190612171735.oolamg7ijnk4la6y@yuggoth.org> On 2019-06-12 11:25:56 -0500 (-0500), Dean Troyer wrote: [...] > Outside of a distro context the biggest reason I do that (use > #!/usr/bin/env) is to make using virtual environments easier. I do > not think that is a large concern here, the first place I could see it > coming up is running in DevStack with virtual envs enabled, which is > not as much of a thing as it should be and probably doesn't work now > anyway. And for anything being installed as a proper Python package with entrypoint wrappers, those will get appropriate shebangs for the virtualenv/venv set in them anyway. 
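To make the distinction in this thread concrete, a small sketch (the file names here are made up, not from the original messages). rpmbuild's automatic dependency generator records a requirement on whatever path the shebang names, so the env form yields a dependency on /usr/bin/env rather than on the Python interpreter:

    $ head -n1 env-style.py direct-style.py
    ==> env-style.py <==
    #!/usr/bin/env python

    ==> direct-style.py <==
    #!/usr/bin/python

Only the second form lets RPM automatically pick up a Requires on /usr/bin/python.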
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From maria.g.perez.ibarra at intel.com Thu Jun 13 02:58:49 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 13 Jun 2019 02:58:49 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190612 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-12 (link) Status: GREEN ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 51 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezpeerchen at gmail.com Thu Jun 13 06:29:15 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Thu, 13 Jun 2019 14:29:15 +0800 Subject: [Starlingx-discuss] About NEV SDK Message-ID: Dear all, When will StarlingX be integrated with the NEV SDK? Is there any schedule plan for this feature? Thanks a lot. Regards Fillmore -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Thu Jun 13 13:27:57 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 13 Jun 2019 13:27:57 +0000 Subject: [Starlingx-discuss] New wiki home page - version 2.0 In-Reply-To: <9A85D2917C58154C960D95352B22818BD07624C2@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD07624C2@fmsmsx123.amr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A7CA86@ALA-MBD.corp.ad.wrs.com> Hi Bruce, my 2 cents. - I like the way it's more centered around Community, and particularly newcomers - I think the page should cater to newcomers, but at the same time still be a useful dashboard for non-newcomers - removing most of the text that's on our current wiki is a good start for this, I think - visually, I like the way the OpenStack page has very little blank space in the 8 'cells' of the Contributor Resources section - I think it'd be good if we could mimic this more (there's a fair bit of whitespace on the draft page) - the "Select the way you want to contribute..." bar would be a good thing to take from the OpenStack page too - or something like it; we don't have the underlying material for a few of those buttons (yet) Bill...
From: Jones, Bruce E Sent: Friday, June 7, 2019 6:02 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] New wiki home page - version 2.0 I have completely changed my draft of a new wiki home page, to follow the example set by https://www.openstack.org/community. Please take a look and let me know if you like it (or not). You can find the new draft StarlingX wiki page at https://wiki.openstack.org/wiki/StarlingX/Draft_new_wiki_home_page Thank you! brucej From Ghada.Khalil at windriver.com Thu Jun 13 13:39:13 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 13 Jun 2019 13:39:13 +0000 Subject: [Starlingx-discuss] OVS-Pmon Testing Results In-Reply-To: <1466AF2176E6F040BD63860D0A241BBD46D024EF@FMSMSX109.amr.corp.intel.com> References: <1466AF2176E6F040BD63860D0A241BBD46D024EF@FMSMSX109.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C153A07E@ALA-MBD.corp.ad.wrs.com> Thanks for the results, Elio. As discussed in the networking meeting, the expected test results for the failed TCs should be updated to indicate a reboot is expected. So we can consider the pass rate of 100% :) Ghada From: Martinez Monroy, Elio [mailto:elio.martinez.monroy at intel.com] Sent: Wednesday, June 12, 2019 2:42 PM To: Khalil, Ghada; Winnicki, Chris Cc: Peters, Matt; Waheed, Numan; Cabrales, Ada Subject: OVS-Pmon Testing Results Hi guys, Sharing my results regarding OVS-Pmon. OVS-Pmon Results: * Testing Results:
STATUS    #
PASS      7
FAIL      4
BLOCKED   0
TOTAL     11
Overall pass rate: 63.63% [cid:image002.png at 01D521CB.DBB7C830] * DISCLAIMER ABOUT FAILURES: This is the intended behavior of the system based on the current PMON configuration. The system cannot tolerate the restart of these processes. This is a limitation of using OVS-DPDK. * Executed test plan https://docs.google.com/document/d/15Pm0oEwE1CQIncu_rCRjewxMrgKHDDrciUF8wRBweJ4/edit?usp=sharing * Execution: OVS PMON integration is required for process state detection, alarming and recovery. The ovs-vswitchd processes need to be monitored. We need to validate that the OVS PMON works according to the new architecture. The testing was executed on a 2+2 Bare Metal configuration, using the 4th June ISO, following instructions from: https://review.opendev.org/#/c/648330/ https://review.opendev.org/#/c/648367/ * Summary o 11 test cases with no option to restart or stop the service (known limitation); the rest of the instructions ran without problems. o Final results can be reviewed at: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit?usp=sharing * Bugs o There are no bugs caused by this testing. * Suggestions: o The feature is healthy enough according to our testing. o This feature doesn't represent any blocker for the 2.0 release. [cid:image001.png at 01CF8BAC.3B4C5DD0] Martinez Monroy, Elio. QA Engineer. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 6473 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image003.png Type: image/png Size: 4914 bytes Desc: image003.png URL: From ildiko.vancsa at gmail.com Thu Jun 13 14:13:03 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 13 Jun 2019 16:13:03 +0200 Subject: [Starlingx-discuss] China Mobile Edge platform evaluation presentation next Tuesday on the Edge WG call Message-ID: <4B1CAE9A-08B9-4ACD-922F-D8AAA118E3CF@gmail.com> Hi, I attended a presentation today from Qihui Zhao about China Mobile’s experience evaluating different edge deployment models with various software components. As many of the evaluated components are part of OpenStack and/or StarlingX, I invited her to next week’s Edge Computing Group call (Tuesday, June 18) to share their findings with the working group and everyone who is interested. For agenda and call details please visit this wiki: https://wiki.openstack.org/wiki/Edge_Computing_Group#Meetings Please let me know if you have any questions. Thanks and Best Regards, Ildikó From Teresa.Ho at windriver.com Thu Jun 13 20:32:30 2019 From: Teresa.Ho at windriver.com (Ho, Teresa) Date: Thu, 13 Jun 2019 20:32:30 +0000 Subject: [Starlingx-discuss] Provisioning changes to host interface commands Message-ID: <918130236148D14B982C7B8BC1C06EA16E1AE8AD@ALA-MBD.corp.ad.wrs.com> The commit has been merged. Please update your installation workflow and/or automation scripts. Teresa From: Ho, Teresa Sent: Tuesday, June 11, 2019 4:21 PM To: starlingx-discuss at lists.starlingx.io Subject: Provisioning changes to host interface commands This is a heads-up that a commit https://review.opendev.org/#/c/661655/ for the story https://storyboard.openstack.org/#!/story/2004273 will soon be merged. It impacts the configuration procedure and may also impact automation. The '--networks' parameter is removed from the host-if-add and host-if-modify commands. Use the interface-network-assign command to assign a platform network to a platform interface. Use the interface-datanetwork-assign command to assign a data network to a data interface. The wiki will be modified to reflect these changes.
For AIO-SX, the current syntax:

OAM_IF=enp0s3
system host-if-modify controller-0 $OAM_IF -c platform --networks oam

becomes:

OAM_IF=enp0s3
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam

For Standard and AIO-DX, the current procedure to reconfigure the platform interfaces after bootstrap:

source /etc/platform/openrc
OAM_IF=enp0s3
MGMT_IF=enp0s8
system host-if-modify controller-0 lo -c none
system host-if-modify controller-0 $OAM_IF --networks oam -c platform
system host-if-modify controller-0 $MGMT_IF -c platform --networks mgmt
system host-if-modify controller-0 $MGMT_IF -c platform --networks cluster-host

becomes:

source /etc/platform/openrc
OAM_IF=enp0s3
MGMT_IF=enp0s8
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6 =="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
    system interface-network-remove ${UUID}
done
system host-if-modify controller-0 $OAM_IF -c platform
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host

For a data interface, the current syntax:

system host-if-modify -m 1500 -n data0 -d ${PHYSNET0} -c data ${COMPUTE} ${DATA0IFUUID}

becomes:

system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}

Regards, Teresa -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.o.perez at intel.com Thu Jun 13 20:54:33 2019 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Thu, 13 Jun 2019 20:54:33 +0000 Subject: [Starlingx-discuss] Expected behavior in a Simplex node? Message-ID: Hi StarlingX team, Running a Simplex node configuration, I perform the following steps: * Simplex running. * VM created successfully. * VM running. * system lock-host controller-0 When locking the single node, is it expected that the VM goes to the "shutdown" state? Thanks for your answers -Richo -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Thu Jun 13 20:56:13 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 13 Jun 2019 20:56:13 +0000 Subject: [Starlingx-discuss] Expected behavior in a Simplex node? In-Reply-To: References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC2547F14@ALA-MBD.corp.ad.wrs.com> Yes. Once the system recovers from the unlock, the VM will be restarted. Brent From: Perez, Ricardo O [mailto:ricardo.o.perez at intel.com] Sent: Thursday, June 13, 2019 4:55 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Expected behavior in a Simplex node? Hi StarlingX team, Running a Simplex node configuration, I perform the following steps: * Simplex running. * VM created successfully. * VM running. * system lock-host controller-0 When locking the single node, is it expected that the VM goes to the "shutdown" state? Thanks for your answers -Richo -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.o.perez at intel.com Thu Jun 13 21:44:52 2019 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Thu, 13 Jun 2019 21:44:52 +0000 Subject: [Starlingx-discuss] Expected behavior in a Simplex node?
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EC2547F14@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EC2547F14@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Brent, Does this also apply to multi-node? As far as I understand, if you have, let's say, a 2+2 configuration, the VM should migrate once you lock the compute where it is hosted, right? Thanks -Richo From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, June 13, 2019 3:56 PM To: Perez, Ricardo O; starlingx-discuss at lists.starlingx.io Subject: RE: Expected behavior in a Simplex node? Yes. Once the system recovers from the unlock, the VM will be restarted. Brent From: Perez, Ricardo O [mailto:ricardo.o.perez at intel.com] Sent: Thursday, June 13, 2019 4:55 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Expected behavior in a Simplex node? Hi StarlingX team, Running a Simplex node configuration, I perform the following steps: * Simplex running. * VM created successfully. * VM running. * system lock-host controller-0 When locking the single node, is it expected that the VM goes to the "shutdown" state? Thanks for your answers -Richo -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Thu Jun 13 23:03:58 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 13 Jun 2019 23:03:58 +0000 Subject: [Starlingx-discuss] Expected behavior in a Simplex node? In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EC2547F14@ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC2548283@ALA-MBD.corp.ad.wrs.com> Correct Brent From: Perez, Ricardo O [mailto:ricardo.o.perez at intel.com] Sent: Thursday, June 13, 2019 5:45 PM To: Rowsell, Brent; starlingx-discuss at lists.starlingx.io Subject: RE: Expected behavior in a Simplex node? Hi Brent, Does this also apply to multi-node? As far as I understand, if you have, let's say, a 2+2 configuration, the VM should migrate once you lock the compute where it is hosted, right? Thanks -Richo From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, June 13, 2019 3:56 PM To: Perez, Ricardo O; starlingx-discuss at lists.starlingx.io Subject: RE: Expected behavior in a Simplex node? Yes. Once the system recovers from the unlock, the VM will be restarted. Brent From: Perez, Ricardo O [mailto:ricardo.o.perez at intel.com] Sent: Thursday, June 13, 2019 4:55 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Expected behavior in a Simplex node? Hi StarlingX team, Running a Simplex node configuration, I perform the following steps: * Simplex running. * VM created successfully. * VM running. * system lock-host controller-0 When locking the single node, is it expected that the VM goes to the "shutdown" state? Thanks for your answers -Richo -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maria.g.perez.ibarra at intel.com Fri Jun 14 01:38:45 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 14 Jun 2019 01:38:45 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-13 (link) Status: GREEN ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tee.Ngo at windriver.com Fri Jun 14 13:16:30 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Fri, 14 Jun 2019 13:16:30 +0000 Subject: [Starlingx-discuss] Host localhost does not have the right image!. In-Reply-To: <25f39b49b84e9d8fe70571f5bf8d27b5@sslemail.net> References: <25f39b49b84e9d8fe70571f5bf8d27b5@sslemail.net> <80ED4CE81E3D8F4099306648E95DAFE453A5B4D8@ALA-MBD.corp.ad.wrs.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A5E1DE@ALA-MBD.corp.ad.wrs.com> (My previous response is being held for the moderator’s review due to the large image embedded in your msg body, so I removed the image and the previous messages in this response.) The current simplex installation guide recommends 2 disks: a root disk of minimum 240G and an extra disk of minimum 50G for the OSD. Please raise a Launchpad and clearly describe your hardware configuration and the steps you did that led to this failure, as well as the Ansible log. Did you interrupt the bootstrap in the middle of the play and replay by any chance? From: Sheldon Hu(胡玉鹏) [mailto:huyp at inspur.com] Sent: June-14-19 2:59 AM To: Ngo, Tee; starlingx-discuss at lists.starlingx.io Cc: cindy.xie at intel.com Subject: RE: [Starlingx-discuss] Host localhost does not have the right image!. Thanks for the reply. There is a new problem when I execute the ansible-playbook command. I have three disks: 600G for sda, 200G for sdb, 200G for sdc. What’s the advice? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.o.perez at intel.com Fri Jun 14 04:16:20 2019 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Fri, 14 Jun 2019 04:16:20 +0000 Subject: [Starlingx-discuss] QAT Validation In-Reply-To: References: <000501d51cc3$4d1f3930$e75dab90$@neusoft.com> Message-ID: Hello StarlingX guys, Finally, with the help from Cindy’s team and the Neusoft team, we are able to have the QAT (PCIe card version) up and running.
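For anyone reproducing this, a sketch of the flavor setup being described; the alias name "qat-vf" and the flavor name are assumptions here and must match the PCI alias actually configured for nova:

    openstack flavor create --vcpus 2 --ram 2048 --disk 10 qat-flavor
    openstack flavor set qat-flavor --property "pci_passthrough:alias"="qat-vf:2"

A VM launched with such a flavor gets the requested QAT VF devices passed through, which is what the console output below shows.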
Here you can see a quick console output showing the QAT device, seen in a VM launched with the described hardware passed through via the pci_passthrough flavor property.

controller-0:~# sudo virsh list
 Id Name State
-----------------------------------
 5 instance-00000018 running
controller-0:~# sudo virsh console 5
Connected to domain instance-00000018
Escape character is ^]
login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
qatricho2 login: cirros
Password:
$ lspci
00:00.0 Class 0600: 8086:1237
00:01.0 Class 0601: 8086:7000
00:01.1 Class 0101: 8086:7010
00:01.2 Class 0c03: 8086:7020
00:01.3 Class 0680: 8086:7113
00:02.0 Class 0300: 1013:00b8
00:03.0 Class 0200: 1af4:1000
00:04.0 Class 0100: 1af4:1001
00:05.0 Class 0b40: 8086:37c9  ---> This is the QAT device passed with the pci_passthrough property inside a VM using CirrOS.
00:06.0 Class 0b40: 8086:37c9  ---> You can see 2 devices, because I have used 2 VFs in this case, but that is out of the scope of this e-mail.
00:07.0 Class 00ff: 1af4:1002

How do I know that 37c9 is the QAT device? Using the following command:

controller-0:~$ sudo lspci | grep Co-processor
Password:
3d:00.0 Co-processor: Intel Corporation C62x Chipset QuickAssist Technology (rev 04)
3d:01.0 Co-processor: Intel Corporation Device 37c9 (rev 04)

So far, this is the current status of the QAT feature testing execution: Test Cases Total: 12 Passed: 9 Failed: 0 :) N/A: 1 ---> This doesn't apply to the Simplex configuration (where I'm actually running the tests) Not Executed: 1 ---> Related to the REST API (we are still working on the steps for this one) In Progress: 1 ---> Related to making use of the QATZlib library So tomorrow I'll continue working on the QATZlib and the REST API; however, this last one is marked as priority 2. If you require further details about the QAT test progress, please go to: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=84126711 Thanks in advance -Ricardo From: Perez, Ricardo O [mailto:ricardo.o.perez at intel.com] Sent: Tuesday, June 11, 2019 4:32 PM To: starlingx-discuss at lists.starlingx.io Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; Su Yang; zhaos at neusoft.com; Cabrales, Ada; Xie, Cindy Subject: Re: [Starlingx-discuss] QAT Validation Hi Zhao, I have tried both ISOs on the current WolfPass server using the external PCIe QAT device. However I'm still hitting the same error after setting up the pci_passthrough property in the flavor and trying to launch a VM using such a flavor. [cid:image002.jpg at 01D5223B.55BE3000] I'm just finishing the installation of StarlingX on a server with an embedded QAT device. As soon as I have finished I'll let you know the results. Thanks in advance -Ricardo From: Perez, Ricardo O Sent: Thursday, June 6, 2019 11:00 PM To: 'zhaos at neusoft.com' Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; Su Yang Subject: RE: QAT Validation Hi Zhao, I have just read your e-mail. Thanks for letting me know that you guys are on holiday (I didn't know) :). Let me check with the proposed image and see how it goes. I'll let you know the results by e-mail.
Thanks -Ricardo From: zhaos at neusoft.com [mailto:zhaos at neusoft.com] Sent: Thursday, June 6, 2019 6:55 PM To: Perez, Ricardo O Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; zhaos at neusoft.com; Su Yang Subject: RE: QAT Validation Hi Ricardo: Because our colleagues are currently on China's Dragon Boat Festival holiday, we may not be able to participate in your meeting today. We expect to schedule an appointment next Tuesday (6/11). We are very sorry that we cannot attend your meeting today. As for the operation guide we provided to you, we have actually run through it many times; please be sure to perform the operations in order. Second, if there are still problems, we recommend using the version (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190530T152953Z/) to try again. Above, thank you! Wish you happy everyday! -------------------------------- From: zhao.shuai Tel: 13704099430 Co.: Neusoft -----Original Appointment----- From: Perez, Ricardo O Sent: June 7, 2019 5:05 To: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; zhaos at neusoft.com Subject: QAT Validation When: Thursday, June 6, 2019 23:00 to Friday, June 7, 2019 0:00 (UTC-06:00) Guadalajara, Mexico City, Monterrey. Where: https://zoom.us/j/2962988538 Importance: High Hello guys, I'm following all your steps using the provided files and here is the status: - ISO installation + provided helm charts - success - Nova overrides using provided yaml file - failing So I would like to have a live session to show you the errors and see what is still missing from my side. P.S. You can forward this meeting to required people also. Thanks -Ricardo Ricardo Perez is inviting you to a scheduled Zoom meeting. Join Zoom Meeting https://zoom.us/j/2962988538 One tap mobile +14086380968,,2962988538# US (San Jose) +16465588656,,2962988538# US (New York) Dial by your location +1 408 638 0968 US (San Jose) +1 646 558 8656 US (New York) Meeting ID: 296 298 8538 Find your local number: https://zoom.us/u/abJfeFY5aC -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 18202 bytes Desc: image002.jpg URL: From Ghada.Khalil at windriver.com Fri Jun 14 17:19:30 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 14 Jun 2019 17:19:30 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 In-Reply-To: References: Message-ID: <151EE31B9FCCA54397A757BC674650F0C153B927@ALA-MBD.corp.ad.wrs.com> Hi Maria/sanity team, Did you notice any slow-down in the system in the last day or so?
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, June 13, 2019 9:39 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-13 (link) Status: GREEN ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Fri Jun 14 17:44:09 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Fri, 14 Jun 2019 17:44:09 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C153B927@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C153B927@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ghada, On bare metal, for the last 3 green ISOs, we didn't notice a particular slowdown. I checked the sanity execution time and it's practically the same between the 3 days. We also used a couple of configs to check some other stuff and the systems were responding properly. However, using the latest build (20190614T013000Z), we also replicated this bug: https://bugs.launchpad.net/starlingx/+bug/1832852 Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" Date: Friday, June 14, 2019 at 12:21 PM To: "Perez Ibarra, Maria G", "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 Hi Maria/sanity team, Did you notice any slow-down in the system in the last day or so? We have seen reports from various developers and testers in WR that the system is very slow, resulting in random failures. The following launchpads were reported: https://bugs.launchpad.net/starlingx/+bug/1832852 https://bugs.launchpad.net/starlingx/+bug/1832854 Is anyone else in the community experiencing this?
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, June 13, 2019 9:39 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-13 (link) Status: GREEN ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard – External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Fri Jun 14 19:47:36 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 14 Jun 2019 15:47:36 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_publish - Build # 252 - Failure! Message-ID: <1320325849.50.1560541657433.JavaMail.javamailuser@localhost> Project: STX_publish Build #: 252 Status: Failure Timestamp: 20190614T194204Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190614T013000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190614T013000Z OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190614T013000Z/logs TIMESTAMP: 20190614T013000Z PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190614T013000Z/inputs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190614T013000Z/logs MASTER_JOB_NAME: STX_build_master_master PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190614T013000Z/outputs MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Fri Jun 14 19:53:21 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 14 Jun 2019 15:53:21 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_publish - Build # 253 - Still Failing! 
In-Reply-To: <151131700.48.1560541653778.JavaMail.javamailuser@localhost> References: <151131700.48.1560541653778.JavaMail.javamailuser@localhost> Message-ID: <1029762572.53.1560542002451.JavaMail.javamailuser@localhost> Project: STX_publish Build #: 253 Status: Still Failing Timestamp: 20190614T194948Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190614T013000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190614T013000Z OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190614T013000Z/logs TIMESTAMP: 20190614T013000Z PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190614T013000Z/inputs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190614T013000Z/logs MASTER_JOB_NAME: STX_build_master_master PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190614T013000Z/outputs MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From Tee.Ngo at windriver.com Fri Jun 14 20:05:42 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Fri, 14 Jun 2019 20:05:42 +0000 Subject: [Starlingx-discuss] Bootstrap playbook relocation Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> Hi, The playbookconfig directory in StarlingX config repo, which contains the source code of the bootstrap playbook, has been relocated to stx-ansible-playbooks repo. Tee -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Jun 14 21:01:47 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 14 Jun 2019 21:01:47 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0C153B927@ALA-MBD.corp.ad.wrs.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C153BCF7@ALA-MBD.corp.ad.wrs.com> Thanks for the info Christopher. Some developers started noticing issues in the June 12 load. And even with an increased timeout in Ansible for https://bugs.launchpad.net/starlingx/+bug/1832852 (submitted today), there are failures in applying the openstack application. It will be interesting to see your sanity results using tomorrow’s load. In the meantime, we are trying to better understand/characterize the issue on our side. Regards, Ghada From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Friday, June 14, 2019 1:44 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 Hi Ghada, On Baremetal, for the last 3 green ISOS, we didn’t noticed a particular slow down. I checked the sanity execution time and it’s practically the same between the 3 days. We also used a couple of configs to check some other stuff and the system were responding properly. However, Using latest build (20190614T013000Z), we also replicated this bug: https://bugs.launchpad.net/starlingx/+bug/1832852 Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Friday, June 14, 2019 at 12:21 PM To: "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 Hi Maria/sanity team, Did you notice any slow-down in the system in the last day or so? 
We have seen reports from various developers and testers in WR that the system is very slow, resulting in random failures. The following launchpads were reported: https://bugs.launchpad.net/starlingx/+bug/1832852 https://bugs.launchpad.net/starlingx/+bug/1832854 Is anyone else in the community experiencing this? Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, June 13, 2019 9:39 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190613 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-13 (link) Status: GREEN ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard – External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From sgw at linux.intel.com Fri Jun 14 21:22:18 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 14 Jun 2019 14:22:18 -0700 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> Message-ID: <12c0853c-878c-976e-aa2a-3fe63f3a7180@linux.intel.com> This seems to have caused a problem with the wrsroot -> sysadmin changes; looks like I am going to have to redo a load of work now. I wish there had been some warning that this was going to happen so I could have weighed in on the effect on the sysadmin update. I think we should have completed the wrsroot -> sysadmin change first and then this move. Sau! On 6/14/19 1:05 PM, Ngo, Tee wrote: > Hi, > The playbookconfig directory in StarlingX config repo, which contains > the source code of the bootstrap playbook, has been relocated to > stx-ansible-playbooks repo.
> Tee > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From maria.g.perez.ibarra at intel.com Fri Jun 14 22:02:21 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 14 Jun 2019 22:02:21 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190614 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-14 (link) Status: RED ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 49 TCs BLOCKED Sanity-Platform 11 TCs BLOCKED ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOCKED Sanity-Platform 09 TCs BLOCKED ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOCKED Sanity-Platform 09 TCs BLOCKED ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOCKED Sanity-Platform 05 TCs BLOCKED ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 49 TCs BLOCKED Sanity Platform 07 TCs BLOCKED ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 51 TCs BLOCKED Sanity Platform 05 TCs BLOCKED ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 52 TCs BLOCKED Sanity Platform 05 TCs BLOCKED ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 52 TCs BLOCKED Sanity Platform 05 TCs BLOCKED ------------------------------ TOTAL: 61 TCs ============================================================================================================ ansible-playbook failed at get wait task results https://bugs.launchpad.net/starlingx/+bug/1832852 Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From dtroyer at gmail.com Fri Jun 14 22:05:15 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 14 Jun 2019 17:05:15 -0500 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> Message-ID: On Fri, Jun 14, 2019 at 3:06 PM Ngo, Tee wrote: > The playbookconfig directory in StarlingX config repo, which contains the source code of the bootstrap playbook, has been relocated to stx-ansible-playbooks repo. Why was the git history not preserved with this move?
dt -- Dean Troyer dtroyer at gmail.com
From Tee.Ngo at windriver.com Fri Jun 14 22:12:27 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Fri, 14 Jun 2019 22:12:27 +0000 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: <12c0853c-878c-976e-aa2a-3fe63f3a7180@linux.intel.com> References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <12c0853c-878c-976e-aa2a-3fe63f3a7180@linux.intel.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A5E67C@ALA-MBD.corp.ad.wrs.com> Saul, The new repo and pending move were communicated in the TSC meeting minutes on June 3rd. I subsequently gave the StarlingX documentation and test teams a heads up regarding the pending remote bootstrap instructions. I forgot about your sysadmin commit and consequently omitted to include you in that communication. This must be frustrating. Sorry :) If it helps, I can post a commit with the sysadmin change in the stx-ansible-playbooks repo and link it to your commit in the root repo, make a dev build and run some tests. You will just need to rebase your commit in stx-config. Again, my apologies for the inconveniences this move has caused. Tee -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: June-14-19 5:22 PM To: starlingx-discuss at lists.starlingx.io Cc: Penney, Don Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation This seems to have caused a problem with the wrsroot -> sysadmin changes; looks like I am going to have to redo a load of work now. I wish there had been some warning that this was going to happen so I could have weighed in on the effect on the sysadmin update. I think we should have completed the wrsroot -> sysadmin change first and then this move. Sau! On 6/14/19 1:05 PM, Ngo, Tee wrote: >> Hi, >> The playbookconfig directory in StarlingX config repo, which contains >> the source code of the bootstrap playbook, has been relocated to >> stx-ansible-playbooks repo. >> Tee >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From sgw at linux.intel.com Fri Jun 14 22:21:29 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 14 Jun 2019 15:21:29 -0700 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: <80ED4CE81E3D8F4099306648E95DAFE453A5E67C@ALA-MBD.corp.ad.wrs.com> References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <12c0853c-878c-976e-aa2a-3fe63f3a7180@linux.intel.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E67C@ALA-MBD.corp.ad.wrs.com> Message-ID: <7afdf507-c999-3d12-11f7-be23e283c052@linux.intel.com> On 6/14/19 3:12 PM, Ngo, Tee wrote: > Saul, > > The new repo and pending move were communicated in TSC meeting minutes on June 3rd. I subsequently gave the StarlingX documentation and test teams a heads up regarding the pending remote bootstrap instructions. > I forgot about your sysadmin commit and consequently omitted to include you in that communication. This must be frustrating. Sorry :) > I knew it was coming; it was just the timing of when it was going to land and the short gerrit review timeline. It might have been better to leave it longer.
> If it helps, I can post a commit with the sysadmin change in the stx-ansible-playbooks repo and link it to your commit in the root repo, make a dev build and run some tests. You will just need to rebase your commit in stx-config. > I am working on the changes now and rebuilding locally before pushing all 6 repos with rebased changes; that will happen on Monday now. Sau! > Again, my apologies for the inconveniences this move has caused. > > Tee > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: June-14-19 5:22 PM > To: starlingx-discuss at lists.starlingx.io > Cc: Penney, Don > Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation > > This seems to have caused a problem with the wrsroot -> sysadmin > changes; looks like I am going to have to redo a load of work now. > > I wish there had been some warning that this was going to happen so I > could have weighed in on the effect on the sysadmin update. > > I think we should have completed the wrsroot -> sysadmin change first and > then this move. > > > Sau! > > > On 6/14/19 1:05 PM, Ngo, Tee wrote: >> Hi, >> The playbookconfig directory in StarlingX config repo, which contains >> the source code of the bootstrap playbook, has been relocated to >> stx-ansible-playbooks repo. >> Tee >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From Tee.Ngo at windriver.com Fri Jun 14 22:41:05 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Fri, 14 Jun 2019 22:41:05 +0000 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> Scott helped set up the new repo. The git history import must have been missed as part of the setup. Does it mean we need to redo the repo creation? Tee -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: June-14-19 6:05 PM To: Ngo, Tee Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation On Fri, Jun 14, 2019 at 3:06 PM Ngo, Tee wrote: > The playbookconfig directory in StarlingX config repo, which contains the source code of the bootstrap playbook, has been relocated to stx-ansible-playbooks repo. Why was the git history not preserved with this move? dt -- Dean Troyer dtroyer at gmail.com
From dtroyer at gmail.com Fri Jun 14 22:49:09 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 14 Jun 2019 17:49:09 -0500 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> Message-ID: On Fri, Jun 14, 2019 at 5:41 PM Ngo, Tee wrote: > Scott helped set up the new repo. The git history import must have been missed as part of the setup. Does it mean we need to redo the repo creation? Yes, the process for preserving history in a move like this is to create a new staging repo with the history using something like [0], then import that when creating it in Gerrit.
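In rough terms, and as a sketch only (the actual script in [0] below does more cleanup, and the exact paths here are illustrative), the extraction boils down to:

git clone https://opendev.org/starlingx/config.git staging
cd staging
# keep only the history that touched the directory being split out;
# note --subdirectory-filter also promotes that directory to the repo root
git filter-branch --prune-empty --subdirectory-filter playbookconfig master

The rewritten branch is then what gets imported (or force-pushed) into the new Gerrit project.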
At this point I think the only way to recover the history is to re-create the Gerrit repo. Any changes to the initial repo would need to be re-created. This is well into needing help from the OpenDev/Infra teams at this point... dt [0] https://github.com/dtroyer/home/blob/master/bin/split-repo.sh -- Dean Troyer dtroyer at gmail.com
From Tee.Ngo at windriver.com Fri Jun 14 23:24:17 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Fri, 14 Jun 2019 23:24:17 +0000 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A5E70C@ALA-MBD.corp.ad.wrs.com> Thanks Dean for the pointer. I will follow up with Scott on Monday and get this sorted out, with help from the OpenDev/Infra team, in an orderly manner to avoid causing disruptions to imminent changes (e.g. Saul's). Tee -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: June-14-19 6:49 PM To: Ngo, Tee Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation On Fri, Jun 14, 2019 at 5:41 PM Ngo, Tee wrote: > Scott helped set up the new repo. The git history import must have been missed as part of the setup. Does it mean we need to redo the repo creation? Yes, the process for preserving history in a move like this is to create a new staging repo with the history using something like [0], then import that when creating it in Gerrit. At this point I think the only way to recover the history is to re-create the Gerrit repo. Any changes to the initial repo would need to be re-created. This is well into needing help from the OpenDev/Infra teams at this point... dt [0] https://github.com/dtroyer/home/blob/master/bin/split-repo.sh -- Dean Troyer dtroyer at gmail.com
From fungi at yuggoth.org Fri Jun 14 23:43:38 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 14 Jun 2019 23:43:38 +0000 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> Message-ID: <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> On 2019-06-14 17:49:09 -0500 (-0500), Dean Troyer wrote: [...] > At this point I think the only way to recover the history is to > re-create the Gerrit repo. Any changes to the initial repo would > need to be re-created. This is well into needing help from the > OpenDev/Infra teams at this point... [...] Once you have an external, publicly cloneable repo somewhere from which I can pull the new refs, I'm happy to push --force those over top of the old content. Or if you prefer, you can propose a temporary ACL change for that repo in openstack/project-config to grant some Gerrit group you're in access to do the same, and I'll be glad to review/approve it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:
From dtroyer at gmail.com Sat Jun 15 00:16:07 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 14 Jun 2019 19:16:07 -0500 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> Message-ID: On Fri, Jun 14, 2019 at 6:44 PM Jeremy Stanley wrote: > Once you have an external, publicly cloneable repo somewhere from > which I can pull the new refs, I'm happy to push --force those over top > of the old content. Or if you prefer, you can propose a temporary > ACL change for that repo in openstack/project-config to grant some > Gerrit group you're in access to do the same, and I'll be glad to > review/approve it. Thanks Jeremy, I imagine we'll create it in starlingx-staging and push from there. It sounds like we'll see what Scott wants to do on Monday. There are 4 merged reviews in that project now; I know we'll have to re-create them. After a force-push will they still have anything interesting? I don't think it will be a big deal, just curious. dt -- Dean Troyer dtroyer at gmail.com
From Don.Penney at windriver.com Sat Jun 15 01:05:34 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Sat, 15 Jun 2019 01:05:34 +0000 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC14EDC3D@ALA-MBD.corp.ad.wrs.com> I see the following: 2dc3228 Add .gitignore to ansible-stx-playbooks repo d3360f6 Populate stx-ansible-playbooks repo 6bef82a Initial zuul / TOX setup 4266117 Added .gitreview The second one on the list, d3360f6, was just moving the playbookconfig dir from stx-config, and populating the centos_iso_image.inc and centos_pkg_dirs files for the build. I believe the first and second ones were also just copies of the files in stx-config, setting up the repo. -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Friday, June 14, 2019 8:16 PM To: Jeremy Stanley Cc: starlingx Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation On Fri, Jun 14, 2019 at 6:44 PM Jeremy Stanley wrote: > Once you have an external, publicly cloneable repo somewhere from > which I can pull the new refs, I'm happy to push --force those over top > of the old content. Or if you prefer, you can propose a temporary > ACL change for that repo in openstack/project-config to grant some > Gerrit group you're in access to do the same, and I'll be glad to > review/approve it. Thanks Jeremy, I imagine we'll create it in starlingx-staging and push from there. It sounds like we'll see what Scott wants to do on Monday. There are 4 merged reviews in that project now; I know we'll have to re-create them. After a force-push will they still have anything interesting? I don't think it will be a big deal, just curious.
dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From fungi at yuggoth.org Sat Jun 15 13:44:52 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 15 Jun 2019 13:44:52 +0000 Subject: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> Message-ID: <20190615134452.tkenlaybrqapy4we@yuggoth.org> On 2019-06-14 19:16:07 -0500 (-0500), Dean Troyer wrote: [...] > Thanks Jeremy, I imagine we'll create it in starlingx-staging and push > from there. It sounds like we'll see what Scott wants to do on > Monday. > > There are 4 merged reviews in that project now, I know we'll have to > re-create them. After a force-push will they still have anything > interesting? I don't think it will be a big deal, just curious. If you can find a way to merge the commits for those changes into the new repository state without rebasing or anything else which might change their Git commit IDs, then Gerrit will see those changes belonging in the branch normally. Even if you can't, I recommend still merging equivalent patches in your staged version to save you the effort of having to re-propose and re-approve them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtroyer at gmail.com Sat Jun 15 19:29:20 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Sat, 15 Jun 2019 14:29:20 -0500 Subject: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> Message-ID: On Fri, Jun 14, 2019 at 7:16 PM Dean Troyer wrote: > Thanks Jeremy, I imagine we'll create it in starlingx-staging and push > from there. It sounds like we'll see what Scott wants to do on > Monday. So I went ahead and did this to see how hard it would be...the result is in [0] and the steps I followed are in [1]. > There are 4 merged reviews in that project now, I know we'll have to > re-create them. After a force-push will they still have anything > interesting? I don't think it will be a big deal, just curious. In the end only commits 426611788310 (.gitreview) and 6bef82acb56d (https://review.opendev.org/#/c/664635/, tox.ini, .zuul.yaml) were used from the existing Gerrit repo. I added one additional commit to account for changes made to files in https://review.opendev.org/665437 after being copied from starlingx/config (they only appear as adds in 665437, the actual changes are not separately visible). [0] should be usable as an import/force-push to Gerrit to reset the ansible-playbooks repo if it looks good to everyone. 
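For completeness, the reset itself would then just be a force-push of the staged branch over the Gerrit copy, something like the following (a sketch only; it assumes an account with force-push rights on the project, per fungi's earlier note, and <user> is a placeholder):

git clone https://github.com/dtroyer/ansible-playbooks-staging-1.git
cd ansible-playbooks-staging-1
git push --force ssh://<user>@review.opendev.org:29418/starlingx/ansible-playbooks master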
dt [0] https://github.com/dtroyer/ansible-playbooks-staging-1 [1] https://gist.github.com/dtroyer/5d420b65d898019467dd2c9a03c15407 -- Dean Troyer dtroyer at gmail.com
From bruce.e.jones at intel.com Sun Jun 16 23:59:11 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Sun, 16 Jun 2019 23:59:11 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting cancelled this week Message-ID: <9A85D2917C58154C960D95352B22818BD076C398@fmsmsx123.amr.corp.intel.com> The weekly distro.openstack call on Tuesday morning PT is cancelled for this week. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Frank.Miller at windriver.com Mon Jun 17 13:36:47 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 17 Jun 2019 13:36:47 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Weekly Meeting Message-ID: Agenda for Monday June 17: 1. Current SB status: 48 total stx2.0 SBs, 40 merged/accepted, 8 open. Will request updates on the 4 exceptions for MS3: * 2002843 K8s Platform Support [Jerry Sun] * 2003909 HELM Chart Override Generation [Gerry Kopec + Daniel Badea] * 2004764 Removal of bare metal Openstack related code [Al Bailey] * 2005358 stx.config sysinv container cleanup [Al Bailey] 2. Higher priority LPs: https://bugs.launchpad.net/starlingx/+bug/1832852 ansible-playbook failed at get wait task results https://bugs.launchpad.net/starlingx/+bug/1832854 AIO-SX Low-latency: Watchdog fires while installing openstack https://bugs.launchpad.net/starlingx/+bug/1830297 Low-latency worker node reboots when pods under heavy load Others..... Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3232 bytes Desc: not available URL:
From Ghada.Khalil at windriver.com Mon Jun 17 14:09:30 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 17 Jun 2019 14:09:30 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - June 13/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0C153C02A@ALA-MBD.corp.ad.wrs.com> The minutes from the last release meeting are below. The plan was to declare MS-3 today pending a green sanity. However, given sanity is currently Red, we will not declare the milestone. We'll discuss next steps in the community meeting on Wednesday. Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release meeting agenda / notes June 13 2019 Status for MS3 milestone - Containers - 10 story boards open - 3 ready to be accepted; code all merged - 3 have final reviews open and expect them to merge this week - 4 Exceptions (agreed to previously) are still being worked - Code Removal Stories - API Auth: Code merged in upstream Armada; need to pick it up and continue development to migrate the stx calls to use auth.
Fcst: TBD / will update next week - Multiple cinder storage tiers: Fcst: Jun 21 is at risk - Openstack Patch Elimination - NUMA-Aware feature is already merged in stx-nova - Exception: External Placement has a few outstanding reviews, but should merge shortly. - Agreed no concerns about this going in a little later - wrsroot - WR testing found an issue which Saul has fixed. New build is in progress. Then testing will continue - Still expecting this to go in shortly; likely not by EOW, but early next week. - Agreed no concerns about this going in a little later - Security-related item - Based on Bruce's email, this was analyzed and deemed a non-issue. No further action required. - Kernel upversion for MDS vulnerabilities - This is merged already - Feature Testing Status - Most features will be done by early July - One exception is containers, currently planned for July 12 - Stories in storyboard - Follow up with Cindy: influxDB - https://storyboard.openstack.org/#!/story/2003357 - 6/17 update: Agreed to abandon this activity. See more details in the story. - Follow up with Abraham: build process simplification - https://storyboard.openstack.org/#!/story/2003712 Conclusion: Don't believe anything is blocking the milestone declaration on Monday June 17 - pending a green sanity - Regression Testing - On track to start on June 17
From vm.rod25 at gmail.com Mon Jun 17 14:25:54 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 17 Jun 2019 09:25:54 -0500 Subject: [Starlingx-discuss] [Multi-OS ] Minutes 6/17/19 Message-ID: Multi-OS team meeting Summary of the meeting: 6/17/19 - Opens - We are working only on the build of the flock services for openSUSE, for now - Installation, deployment, provisioning and the other phases of the project are not in our scope right now - openSUSE flock services packaging update - Good progress; completed the packages planned for the sprint: https://build.opensuse.org/project/show/Cloud:StarlingX:2.0 - 42 (14 passing, 1 failing) of 55 packages - Working on a CI system to test the spec files and help the maintainer detect if: - SPEC files build? - RPMs have the correct file content - more (...) -------------- next part -------------- An HTML attachment was scrubbed... URL:
From scott.little at windriver.com Mon Jun 17 15:06:43 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 17 Jun 2019 11:06:43 -0400 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC14EDC3D@ALA-MBD.corp.ad.wrs.com> References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> <6703202FD9FDFF4A8DA9ACF104AE129FC14EDC3D@ALA-MBD.corp.ad.wrs.com> Message-ID: Well, so as long as Tee, and the authors of these reviews (if different), and any other interested parties are all aware of the forthcoming history rewrite, I won't object. The interested community should be small for such a new repo. Scott On 2019-06-14 9:05 p.m., Penney, Don wrote: > I see the following: > 2dc3228 Add .gitignore to ansible-stx-playbooks repo > d3360f6 Populate stx-ansible-playbooks repo > 6bef82a Initial zuul / TOX setup > 4266117 Added .gitreview > > The second one on the list, d3360f6, was just moving the playbookconfig dir from stx-config, and populating the centos_iso_image.inc and centos_pkg_dirs files for the build.
I believe the first and second ones were also just copies of the files in stx-config, setting up the repo. > > > -----Original Message----- > From: Dean Troyer [mailto:dtroyer at gmail.com] > Sent: Friday, June 14, 2019 8:16 PM > To: Jeremy Stanley > Cc: starlingx > Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation > > On Fri, Jun 14, 2019 at 6:44 PM Jeremy Stanley wrote: >> Once you have an external, publicly cloneable repo somewhere from >> which I can pull the new refs, I'm happy push --force those over top >> of the old content. Or if you prefer, you can propose a temporary >> ACL change for that repo in openstack/project-config to grant some >> Gerrit group you're in access to do the same, and I'll be glad to >> review/approve it. > Thanks Jeremy, I imagine we'll create it in starlingx-staging and push > from there. It sounds like we'll see what Scott wants to do on > Monday. > > There are 4 merged reviews in that project now, I know we'll have to > re-create them. After a force-push will they still have anything > interesting? I don't think it will be a big deal, just curious. > > dt > From scott.little at windriver.com Mon Jun 17 15:37:57 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 17 Jun 2019 11:37:57 -0400 Subject: [Starlingx-discuss] mirror.starlingx.cengn.ca reachability In-Reply-To: References: Message-ID: <4e3cf7eb-521b-187a-d088-6a6ab2fc461d@windriver.com> FYI, there is a private thread between CENGN and Pratik trying to work out their connectivity issues. Has anyone else had significant connectivity issues with CENGN? Scott On 2019-06-06 8:28 a.m., Pratik M. wrote: > Hi, > I cannot reach mirror.starlingx.cengn.ca. Checked from two ISPs in > India. But seems to be reachable from others. From below, it seems > that the site is not reachable from many ISPs/locations (Frankfurt, > Toronto etc.). > > https://tools.keycdn.com/ping (8 out of 14 fails) > and > > https://lg.he.net/ > core1.sea1.he.net, > ping 135.84.104.40 numeric count 5 > Sending 5, 16-byte ICMP Echo to 135.84.104.40, timeout 5000 msec, TTL 64 > Request timed out. > [...] > core1.fra1.he.net> ping 135.84.104.40 numeric count 5 > Sending 5, 16-byte ICMP Echo to 135.84.104.40, timeout 5000 msec, TTL 64 > Request timed out. > [...] > core1.syd1.he.net> ping 135.84.104.40 numeric count 5 > Sending 5, 16-byte ICMP Echo to 135.84.104.40, timeout 5000 msec, TTL 64 > Reply from 135.84.104.40 : bytes=16 time=216ms TTL=55 > > Here is a traceroute from a failed ping: > > $ tracert -d mirror.starlingx.cengn.ca > > Tracing route to mirror.starlingx.cengn.ca [135.84.104.40] > over a maximum of 30 hops: > > 3 120 ms 139 ms 203 ms 10.50.112.57 > 4 108 ms 118 ms 140 ms 10.61.37.33 > 5 60 ms 54 ms 94 ms 125.22.219.5 > 6 211 ms 230 ms 232 ms 182.79.152.160 > 7 229 ms 196 ms 222 ms 63.218.107.193 > 8 576 ms 785 ms 977 ms 63.218.4.234 > 9 520 ms 645 ms 549 ms 63.218.4.234 > 10 678 ms 586 ms 548 ms 209.8.108.158 > 11 499 ms 568 ms 615 ms 209.148.237.14 > 12 553 ms 582 ms 596 ms 209.148.229.230 > 13 484 ms 515 ms 550 ms 209.148.249.217 > 14 535 ms 587 ms 643 ms 209.148.251.93 > 15 * 440 ms 379 ms 209.148.251.82 > 16 389 ms 418 ms 444 ms 207.107.79.178 > 17 207.107.79.178 reports: Destination net unreachable. 
> > Thanks > Pratik > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From Tee.Ngo at windriver.com Mon Jun 17 16:48:21 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Mon, 17 Jun 2019 16:48:21 +0000 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> <6703202FD9FDFF4A8DA9ACF104AE129FC14EDC3D@ALA-MBD.corp.ad.wrs.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A5E983@ALA-MBD.corp.ad.wrs.com> Hi Dean, Jeremy, Scott, Could you do what is necessary to have the new repo up with git history? A few developers are waiting to update/add new playbooks to this repo. Thanks! Tee -----Original Message----- From: Scott Little [mailto:scott.little at windriver.com] Sent: June-17-19 11:07 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation Well, so as long as Tee, and the authors of these reviews (if different), and any other interested parties are all aware of the forthcoming history rewrite, I won't object. The interested community should be small for such a new repo. Scott On 2019-06-14 9:05 p.m., Penney, Don wrote: > I see the following: > 2dc3228 Add .gitignore to ansible-stx-playbooks repo > d3360f6 Populate stx-ansible-playbooks repo > 6bef82a Initial zuul / TOX setup > 4266117 Added .gitreview > > The second one on the list, d3360f6, was just moving the playbookconfig dir from stx-config, and populating the centos_iso_image.inc and centos_pkg_dirs files for the build. I believe the first and second ones were also just copies of the files in stx-config, setting up the repo. > > > -----Original Message----- > From: Dean Troyer [mailto:dtroyer at gmail.com] > Sent: Friday, June 14, 2019 8:16 PM > To: Jeremy Stanley > Cc: starlingx > Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation > > On Fri, Jun 14, 2019 at 6:44 PM Jeremy Stanley wrote: >> Once you have an external, publicly cloneable repo somewhere from >> which I can pull the new refs, I'm happy to push --force those over top >> of the old content. Or if you prefer, you can propose a temporary >> ACL change for that repo in openstack/project-config to grant some >> Gerrit group you're in access to do the same, and I'll be glad to >> review/approve it. > Thanks Jeremy, I imagine we'll create it in starlingx-staging and push > from there. It sounds like we'll see what Scott wants to do on > Monday. > > There are 4 merged reviews in that project now, I know we'll have to > re-create them. After a force-push will they still have anything > interesting? I don't think it will be a big deal, just curious.
> > dt > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From dtroyer at gmail.com Mon Jun 17 16:48:53 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 17 Jun 2019 11:48:53 -0500 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> <6703202FD9FDFF4A8DA9ACF104AE129FC14EDC3D@ALA-MBD.corp.ad.wrs.com> Message-ID: On Mon, Jun 17, 2019 at 10:15 AM Scott Little wrote: > Well, so as long as Tee, and the authors of these reviews (if > different), and any other interested parties are all aware of the > forthcoming history rewrite, I won't object. The interested community > should be small for such a new repo. This is exactly why I didn't just ask fungi to force-push over the original repo. You guys get to make the call to a) just leave it all as-is and continue on; b) force-push my test or another similar exercise containing the original history; or c) start over from scratch. I would recommend b). The biggest concern I have about not fixing it is that I am planning to do more of these and want to set the right precedent and process for doing extractions, especially when it will be code with a year's worth of history behind it rather than a few months. Also, I feel creating a new repo rather than force-pushing one is mostly unnecessary work. If you do choose b), please verify it to be correct. The playbookconfig directory compared exactly using diff -rw; the root dir needed a bit of work. dt -- Dean Troyer dtroyer at gmail.com
From scott.little at windriver.com Mon Jun 17 16:57:23 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 17 Jun 2019 12:57:23 -0400 Subject: Re: [Starlingx-discuss] Bootstrap playbook relocation In-Reply-To: References: <80ED4CE81E3D8F4099306648E95DAFE453A5E488@ALA-MBD.corp.ad.wrs.com> <80ED4CE81E3D8F4099306648E95DAFE453A5E6D1@ALA-MBD.corp.ad.wrs.com> <20190614234337.6kyxvh44bcxml2vi@yuggoth.org> <6703202FD9FDFF4A8DA9ACF104AE129FC14EDC3D@ALA-MBD.corp.ad.wrs.com> Message-ID: <6fb54cac-b39f-89ba-32f5-c7888295a065@windriver.com> I'm ok with option b). Please proceed if there are no other objections. Scott On 2019-06-17 12:48 p.m., Dean Troyer wrote: > On Mon, Jun 17, 2019 at 10:15 AM Scott Little > wrote: >> Well, so as long as Tee, and the authors of these reviews (if >> different), and any other interested parties are all aware of the >> forthcoming history rewrite, I won't object. The interested community >> should be small for such a new repo. > This is exactly why I didn't just ask fungi to force-push over the > original repo. You guys get to make the call to a) just leave it all > as-is and continue on; b) force-push my test or another similar > exercise containing the original history; or c) start over from > scratch. I would recommend b). The biggest concern I have about not > fixing it is that I am planning to do more of these and want to set > the right precedent and process for doing extractions, especially when > it will be code with a year's worth of history behind it rather than a > few months. Also, I feel creating a new repo rather than > force-pushing one is mostly unnecessary work. > > If you do choose b), please verify it to be correct. > > dt >
From dtroyer at gmail.com Mon Jun 17 18:02:05 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 17 Jun 2019 13:02:05 -0500 Subject: [Starlingx-discuss] [ansible-playbooks] Repo re-imported with history Message-ID: A few minutes ago we force-pushed a new version of the starlingx/ansible-playbooks repo into Gerrit to include the history of those files from starlingx/config. The contents of the files at HEAD (as I write this) are the same as before, but all of the relevant git history is now present and the 4 previously-existing commits have been re-written into two new ones. This means any local copies of this repo need to be re-cloned or re-synced. Scott reports that the following will do a proper re-sync for developers using the repo tool: repo sync --force-sync Of course, this always works too: # rename or remove the existing repo git clone https://opendev.org/starlingx/ansible-playbooks.git Please let us know if there are any problems, here or in IRC in #starlingx. Thanks dt -- Dean Troyer dtroyer at gmail.com
From Anirudh.Gupta at hsc.com Mon Jun 17 03:00:52 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Mon, 17 Jun 2019 03:00:52 +0000 Subject: [Starlingx-discuss] StarlingX 2019.05 Release Queries In-Reply-To: References: Message-ID: Hi All, It would be great if anyone can please address my queries below. I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: * As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date?
* As per the release notes of 2018.10, I am using the pre-built StarlingX image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If yes, what will be the changes in the footprint? And will there be another ISO with Kubernetes support available for the 2019.05 release? * As per the below links of 2018.10 and 2019.05, the hardware requirements are the same. https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the hardware requirements for the two releases, or will they remain unchanged? * What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) From: Anirudh Gupta Sent: 12 June 2019 15:23 To: starlingx-discuss at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi Team, I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: * As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? * As per the release notes of 2018.10, I am using the pre-built StarlingX image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If yes, what will be the changes in the footprint? And will there be another ISO with Kubernetes support available for the 2019.05 release? * As per the below links of 2018.10 and 2019.05, the hardware requirements are the same. https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the hardware requirements for the two releases, or will they remain unchanged? * What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? It would be a great help if anyone can throw some light on my queries so that I can develop a better picture to go with StarlingX in our production environment. Regards Anirudh Gupta (Senior Engineer) Hughes Systique Corporation D-23,24 Infocity II, Sector 33, Gurugram, Haryana 122001 DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed...
The now 6 reviews are here: https://review.opendev.org/#/q/topic:sysadmin+(status:open+OR+status:merged) They all depend on the build-tools change which is current W-1, so when that is released they should merge. Thanks for everyones patience and support in getting the change in. Sau! From Don.Penney at windriver.com Mon Jun 17 19:38:16 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Mon, 17 Jun 2019 19:38:16 +0000 Subject: [Starlingx-discuss] Pending user name change wrsroot -> sysadmin In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC14EE346@ALA-MBD.corp.ad.wrs.com> All updates have now merged. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Monday, June 17, 2019 2:21 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Pending user name change wrsroot -> sysadmin Folks, As many of you know we are in the process of renaming wrsroot. These changes are been rebased to the last master and new ansible-playbooks repo. Once these changes get merged and new images build, the Doc and QA changes will be needed to get logged in correctly. The new user name is 'sysadmin' and the default password is 'sysadmin', you will still be prompted on first login to change this. The group name is sys_protected if there are any tests or documentation about the group. The now 6 reviews are here: https://review.opendev.org/#/q/topic:sysadmin+(status:open+OR+status:merged) They all depend on the build-tools change which is current W-1, so when that is released they should merge. Thanks for everyones patience and support in getting the change in. Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maria.g.perez.ibarra at intel.com Mon Jun 17 21:09:30 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 17 Jun 2019 21:09:30 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190617 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-17 (link) Status: RED ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 49 TCs BLOQUED Sanity-Platform 11 TCs BLOQUED ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOQUED Sanity-Platform 09 TCs BLOQUED ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOQUED Sanity-Platform 09 TCs BLOQUED ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOQUED Sanity-Platform 05 TCs BLOQUED ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 49 TCs BLOQUED Sanity Platform 07 TCs BLOQUED ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 51 TCs BLOQUED Sanity Platform 05 TCs BLOQUED ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 52 TCs BLOQUED Sanity Platform 05 TCs BLOQUED ------------------------------ 
TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 52 TCs BLOQUED Sanity Platform 05 TCs BLOQUED ------------------------------ TOTAL: 61 TCs ============================================================================================================ ansible-playbook failed at get wait task results https://bugs.launchpad.net/starlingx/+bug/1832852 Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Jun 17 21:29:51 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 17 Jun 2019 21:29:51 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 06/13 Message-ID: <151EE31B9FCCA54397A757BC674650F0C153C72E@ALA-MBD.corp.ad.wrs.com> Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Team Meeting Agenda/Notes - Jun 13/2019 Bugs - https://bugs.launchpad.net/starlingx/+bug/1824829 , the patch has got one +2 and needs another one to get merged. Not changes since last network meeting - Ghada to follow up with core and get merged today - https://bugs.launchpad.net/starlingx/+bug/1829403 , verify the OVS-DPDK running correctly and the bug is resulted by hugepage allocation. Transfer to Austin who is familar with hugepage allocation in StarlingX. - https://bugs.launchpad.net/starlingx/+bug/1831130 , can't reproduce this bug. Still investigating. - Currently suspect an environment issue with the systems in Mexico. Elio and team continue to investigate. Feature Development - Containerized OVS - Merged: https://review.opendev.org/#/c/662195 - Got one +2: https://review.opendev.org/#/c/663629 - Finished the testing on virtual environment with Simplex & Duplex. - Expect to merge 06/13 - Cluster Network Configuration - Code cleanup update is still in code review. - One item remaining to address; code ready and testing in progress. - Expect to merge by EOW - June 14 Networking Test Status - OVS-DPDK Firewall - Testing complete - 1 bug failing 2 TCs - OVS Integration w/ PMON - Testing complete. - No issues; 4 TCs need to be updated to align with expected behavior. - Containerized OVS - Code merge will be merged a week later than expected due to additional designer testing. - Expect testing to push by a week - Fcst: Jun 21 Performance Tests for Networking - Forrest's team will start working on this initiative in 2wks (end of June) - The first step is to come up with a proposal -- including the recommended tools to use From maria.g.perez.ibarra at intel.com Mon Jun 17 21:39:13 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 17 Jun 2019 21:39:13 +0000 Subject: [Starlingx-discuss] Recall: [Containers] Sanity Test - ISO 20190617 Message-ID: Perez Ibarra, Maria G would like to recall the message, "[Starlingx-discuss] [Containers] Sanity Test - ISO 20190617". From ezpeerchen at gmail.com Tue Jun 18 03:05:13 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Tue, 18 Jun 2019 11:05:13 +0800 Subject: [Starlingx-discuss] Performance Tests for STX 1.0 Message-ID: Dear all, Where could i find the test plan or reports about performance Tests for STX 1.0? Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juan.carlos.alonso at intel.com Tue Jun 18 03:13:22 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 18 Jun 2019 03:13:22 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190617 In-Reply-To: References: Message-ID: <8557B550001AFB46A43A0CCC314BF8516876FB1C@FMSMSX108.amr.corp.intel.com> Cannot log in to keystone 'source /etc/platform/openrc' https://bugs.launchpad.net/starlingx/+bug/1833157 Regards. Juan Carlos Alonso From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Monday, June 17, 2019 4:10 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190617 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-17 (link) Status: RED ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 49 TCs BLOQUED Sanity-Platform 11 TCs BLOQUED ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOQUED Sanity-Platform 09 TCs BLOQUED ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOQUED Sanity-Platform 09 TCs BLOQUED ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs | 1 TCs FAIL Sanity-OpenStack 52 TCs BLOQUED Sanity-Platform 05 TCs BLOQUED ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 49 TCs BLOQUED Sanity Platform 07 TCs BLOQUED ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 51 TCs BLOQUED Sanity Platform 05 TCs BLOQUED ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 52 TCs BLOQUED Sanity Platform 05 TCs BLOQUED ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs | 1 TCs FAIL Sanity OpenStack 52 TCs BLOQUED Sanity Platform 05 TCs BLOQUED ------------------------------ TOTAL: 61 TCs ============================================================================================================ ansible-playbook failed at get wait task results https://bugs.launchpad.net/starlingx/+bug/1832852 Regards! Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yatindrax.shashi at intel.com Tue Jun 18 12:27:03 2019 From: yatindrax.shashi at intel.com (Shashi, YatindraX) Date: Tue, 18 Jun 2019 12:27:03 +0000 Subject: [Starlingx-discuss] STX 2.0 (Containerized): Configuring Data network (Provider Network) type as flat fails Message-ID: Hi All, I have installed containerized STX 2.0 system in my lab and wanted to connect data network. I used default vswitch type i.e OVS. My lab network is of type flat and I don't have easy access to the switch so I would like to use data network type as flat network. But when I see in the wiki page it talked about vlan type only. I used the command "system datanetwork-add ${PHYSNET0} flat system host-if-modify & system interface-datanetwork-assign " But I am unable to connect to datanetwork. Was unable to upload error image, but I can provide. 
Have anybody tried with the flat network type or can somebody tell me what could be error or how should I debug. It will be so helpful for us to deploy and test Mit freundlichen Grüßen/ with best regards, Yatindra Shashi IoT Technical Solutions Engineer Munich, Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Jun 18 13:29:31 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 18 Jun 2019 13:29:31 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/19 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FA6F14@SHSMSX104.ccr.corp.intel.com> Agenda: - Ceph test status report (Abraham/Fernando) - QAT test status report (Ricardo) - Bug review for stx.storage and stx.distro.other (Tingjie/Ovidiu, Yong/Shuicheng/Bin) - Opens (all) Thx. - cindy -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, June 19, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 * Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Jun 18 13:49:51 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 18 Jun 2019 13:49:51 +0000 Subject: [Starlingx-discuss] Community Call (June 19, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A7E49B@ALA-MBD.corp.ad.wrs.com> Reminder of tomorrow's Community call, topics include... - MS-3 status - bug count / resolution forecast Please feel free to add topics to the agenda at [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso= 20190619T1400 From vm.rod25 at gmail.com Tue Jun 18 13:54:24 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 18 Jun 2019 08:54:24 -0500 Subject: [Starlingx-discuss] Performance Tests for STX 1.0 In-Reply-To: References: Message-ID: Hi Ezpeer Thanks a lot for your mail. We are working on a full plan for performance testing in incoming release The initial draft presentation is here: https://drive.google.com/open?id=1Nr12zDRXf34kpjiA0LsFLU8GIpMY8Y-H2zmiC96CD4A The base of the strategy will be base on OPNFV, as described in the presentation Thanks a lot Victor Rodriguez On Mon, Jun 17, 2019 at 10:05 PM Ezpeer Chen wrote: > > Dear all, > > Where could i find the test plan or reports about performance Tests for STX 1.0? > > Thanks a lot. 
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Stefan.Dinescu at windriver.com Tue Jun 18 16:54:07 2019
From: Stefan.Dinescu at windriver.com (Dinescu, Stefan)
Date: Tue, 18 Jun 2019 16:54:07 +0000
Subject: [Starlingx-discuss] Openstackclient will move to a container
In-Reply-To:
References: ,
Message-ID:

Hello,

Due to the below commits causing sanity to fail, we will be reverting all 3 commits. Work on containerized clients will resume once the stx3.0 branch is open.

Once the revert commits have been merged, please rebase your code to pick up these changes so you can again run openstack commands.

Thanks,
Stefan

________________________________
From: Dinescu, Stefan
Sent: Monday, June 10, 2019 5:23 PM
To: starlingx-discuss at lists.starlingx.io
Subject: RE: Openstackclient will move to a container

Hi all,

Just a heads-up, the plan is to merge this feature this week. The 3 reviews that are to be merged are [1], [2] and [3].

For now, the baremetal clients will remain installed, so if you encounter any issues with the containerized clients, you can work around those by using the "platform-openstack" alias. Make sure you update your workflow to take into account this change.

As always, if you have any questions, feel free to ask me.

Thanks,
Stefan

[1]: https://review.opendev.org/#/c/654423/16
[2]: https://review.opendev.org/#/c/654424/19
[3]: https://review.opendev.org/#/c/655118/11

________________________________
From: Dinescu, Stefan
Sent: Wednesday, April 24, 2019 8:58 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Openstackclient will move to a container

Hi everyone,

As part of storyboard [0], openstackclients will move from a baremetal installation to being run inside a container. The platform openstackclient will only be able to be used for platform services (keystone, barbican). For all other services (nova, glance, cinder, etc.) the containerized clients must be used.

To ensure a smooth transition, the submitted code will include a wrapper so that openstack commands will function as normal. The "openstack" command is aliased to this wrapper and will only be able to be used for the container services. The clients pod will be configured automatically with the correct "clouds.yaml" auth file, so no extra steps are needed to configure the pod.

In order to use the platform openstack command, another alias is provided for it: "platform-openstack". You can also access the platform openstack by using the full path of the executable: "/usr/bin/openstack"

For the first batch of commits, the platform clients will not be removed, but they are expected to be removed in the following weeks, so please update any automation scripts you might have for this new behavior.

If you have any questions regarding this feature/change, feel free to ask me.

Thanks,
Stefan

[0] Storyboard: https://storyboard.openstack.org/#!/story/2005312

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
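To make the split concrete, day-to-day usage after this change looks roughly as follows. This is a sketch assuming the aliases are wired up as Stefan describes; the service commands themselves are ordinary openstackclient examples, not taken from the patches:

  # containerized clients (the default "openstack" alias): nova, glance, cinder, etc.
  openstack server list
  # platform services (keystone, barbican) via the bare-metal client
  platform-openstack endpoint list
  # same as the platform alias, using the full path
  /usr/bin/openstack endpoint list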
From maria.g.perez.ibarra at intel.com Tue Jun 18 20:45:41 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 18 Jun 2019 20:45:41 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190618
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-18 (link)
Status: RED

======================
Bare Metal environment
======================

AIO - Simplex:
Setup 03 TCs
Provision-Containers 01 TCs | 1 TCs FAIL
Sanity-OpenStack 49 TCs BLOCKED
Sanity-Platform 11 TCs BLOCKED
------------------------------
TOTAL: 64 TCs

AIO - Duplex:
Setup 03 TCs
Provision-Containers 01 TCs | 1 TCs FAIL
Sanity-OpenStack 52 TCs BLOCKED
Sanity-Platform 09 TCs BLOCKED
------------------------------
TOTAL: 65 TCs

Standard - Local Storage (2+2):
Setup 03 TCs
Provision-Containers 01 TCs | 1 TCs FAIL
Sanity-OpenStack 52 TCs BLOCKED
Sanity-Platform 09 TCs BLOCKED
------------------------------
TOTAL: 65 TCs

Standard - External Storage (2+2+2):
Setup 03 TCs
Provision-Containers 01 TCs | 1 TCs FAIL
Sanity-OpenStack 52 TCs BLOCKED
Sanity-Platform 05 TCs BLOCKED
------------------------------
TOTAL: 61 TCs

===================
Virtual Environment
===================

AIO - Simplex
Setup 03 TCs
Provisioning 01 TCs | 1 TCs FAIL
Sanity OpenStack 49 TCs BLOCKED
Sanity Platform 07 TCs BLOCKED
------------------------------
TOTAL: 60 TCs

AIO - Duplex
Setup 03 TCs
Provisioning 01 TCs | 1 TCs FAIL
Sanity OpenStack 51 TCs BLOCKED
Sanity Platform 05 TCs BLOCKED
------------------------------
TOTAL: 61 TCs

Standard - Local Storage (2+2):
Setup 03 TCs
Provisioning 01 TCs | 1 TCs FAIL
Sanity OpenStack 52 TCs BLOCKED
Sanity Platform 05 TCs BLOCKED
------------------------------
TOTAL: 61 TCs

Standard - External Storage (2+2+2):
Setup 03 TCs
Provisioning 01 TCs | 1 TCs FAIL
Sanity OpenStack 52 TCs BLOCKED
Sanity Platform 05 TCs BLOCKED
------------------------------
TOTAL: 61 TCs

============================================================================================================

Cannot source '/etc/platform/openrc'
https://bugs.launchpad.net/starlingx/+bug/1833157

Regards!
Maria G.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From Akshay.346 at hsc.com Tue Jun 18 07:05:55 2019
From: Akshay.346 at hsc.com (Akshay 346)
Date: Tue, 18 Jun 2019 07:05:55 +0000
Subject: [Starlingx-discuss] STARLINGX ISSUE
Message-ID:

Hello Team,

I am trying to deploy the All-in-one simplex mode of StarlingX with the latest builds for the 19.05 release (i.e. the image taken from the build of 17 June 19).

The ansible-playbook run fails at:
TASK [apply-bootstrap-manifest : Applying puppet bootstrap manifest]
stating:
"stderr": "cp: cannot stat '/tmp/hieradata/192.168.204.3.yaml': No such file or directory"
The same error appears for several other files, such as '/tmp/hieradata/system.yaml' and '/tmp/hieradata/secure_system.yaml'.

Please guide me on how to resolve this issue.

Best Regards,
Akshay

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it.
Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From Anirudh.Gupta at hsc.com Tue Jun 18 09:10:51 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Tue, 18 Jun 2019 09:10:51 +0000
Subject: [Starlingx-discuss] StarlingX 2019.05 Release Queries
Message-ID:

Hi All,

It would be great if anyone can please address my below queries.

I have prepared an All in One Simplex/Duplex setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release.

Going forward, I do have some queries:

* As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019.
https://wiki.openstack.org/wiki/StarlingX/Release_Plan
Is there any further update on the release date?

* As per the release notes of 2018.10, I am using the pre-built StarlingX image.
https://docs.starlingx.io/releasenotes/index.html#release-notes
http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/
Will the release 2019.05 be based on Kubernetes? If yes, what will be the changes in the footprint? And will there be another ISO with Kubernetes support available for the 2019.05 release?

* As per the below links of 2018.10 and 2019.05, the hardware requirements are the same.
https://docs.starlingx.io/deployment_guides/current/duplex.html
https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html
So will there be any change in the hardware requirements between the two releases, or will they remain unchanged?

* What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release?

Regards

Anirudh Gupta
(Senior Engineer)

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From yatindrax.shashi at intel.com Tue Jun 18 09:21:20 2019
From: yatindrax.shashi at intel.com (Shashi, YatindraX)
Date: Tue, 18 Jun 2019 09:21:20 +0000
Subject: [Starlingx-discuss] STX 2.0 (Containerized): Configuring Data network (Provider Network) type as flat fails
Message-ID:

Hi All,
I have installed a containerized STX 2.0 system in my lab and wanted to connect the data network. I used the default vswitch type, i.e. OVS. My lab network is a flat network and I don't have easy access to the switch, so I would like to use the flat data network type. But the wiki page only talks about the vlan type.
I used the commands "system datanetwork-add ${PHYSNET0} flat", "system host-if-modify" and "system interface-datanetwork-assign", but I am unable to connect to the data network. I see the error shown in the attached image.
Has anybody tried the flat network type, or can somebody tell me what the error could be and how I should debug it? That would be very helpful for our deployment and testing.

Mit freundlichen Grüßen/ with best regards,
Yatindra Shashi
Munich, Germany

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: data net.JPG
Type: image/jpeg
Size: 61926 bytes
Desc: data net.JPG
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: data_dashboard_error.JPG
Type: image/jpeg
Size: 90348 bytes
Desc: data_dashboard_error.JPG
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: datanet_created.JPG
Type: image/jpeg
Size: 64903 bytes
Desc: datanet_created.JPG
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: no providernet_physical_shown.JPG
Type: image/jpeg
Size: 101378 bytes
Desc: no providernet_physical_shown.JPG
URL: 
From cindy.xie at intel.com Tue Jun 18 09:59:53 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Tue, 18 Jun 2019 09:59:53 +0000
Subject: [Starlingx-discuss] StarlingX 2019.05 Release Queries
In-Reply-To:
References:
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FA6A34@SHSMSX104.ccr.corp.intel.com>

Gupta,
- We are still on track to release our release 2 (stx.2.0) in Aug'19.
- StarlingX will be based on K8s from the stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of the footprint.
- Right now, we do not support a version upgrade from the 2018.10 release to the new release. You need to re-deploy your cluster.

Thx. - cindy

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Tuesday, June 18, 2019 5:11 PM
To: Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Ghada Khalil ; Jones, Bruce E ; Frank Miller ; Xie, Cindy ; Arce Moreno, Abraham ; Hazzim Anaya Casas ; Hernandez Gonzalez, Fernando 
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: StarlingX 2019.05 Release Queries

Hi All,
It would be great if anyone can please address my below queries.
I have prepared an All in One Simplex/Duplex setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release.
Going forward, I do have some queries:
* As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019.
https://wiki.openstack.org/wiki/StarlingX/Release_Plan
Is there any further update on the release date?
* As per the release notes of 2018.10, I am using the pre-built StarlingX image.
https://docs.starlingx.io/releasenotes/index.html#release-notes
http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/
Will the release 2019.05 be based on Kubernetes? If yes, what will be the changes in the footprint? And will there be another ISO with Kubernetes support available for the 2019.05 release?
* As per the below links of 2018.10 and 2019.05, the hardware requirements are the same.
Regards

Anirudh Gupta
(Senior Engineer)

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From Anirudh.Gupta at hsc.com Tue Jun 18 11:22:15 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Tue, 18 Jun 2019 11:22:15 +0000
Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query
Message-ID:

Note: Subject Changed

Hi Cindy,

Thanks for the update.

I want to set up a StarlingX 2018.10 simplex system, but I don't have DPDK support on my machine, as a result of which the compute is in a degraded state. I can see an error in the ovs-vswitch logs.

Error Message:
error: "Error attaching device '0000:03:00.0' to DPDK"

Can you please suggest an alternative to this?

I have tried setting the vswitch type to "none" and "ovs", using the below commands:
* system modify --vswitch_type=ovs
* system modify --vswitch_type=none

But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remain in a dead state. I need to manually start all the services, but even after starting the services, my compute remains in a "degraded" state.

Can't I create a StarlingX 2018.10 Simplex setup without DPDK support?

Regards

Anirudh Gupta
(Senior Engineer)

From: Xie, Cindy
Sent: 18 June 2019 15:30
To: Anirudh Gupta ; Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Ghada Khalil? ; Jones, Bruce E ; Frank Miller ; Arce Moreno, Abraham ; Hazzim Anaya Casas? ; Hernandez Gonzalez, Fernando 
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: StarlingX 2019.05 Release Queries

Gupta,
* We are still on track to release our release 2 (stx.2.0) in Aug'19.
* StarlingX will be based on K8s from the stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of the footprint.
* Right now, we do not support a version upgrade from the 2018.10 release to the new release. You need to re-deploy your cluster.

Thx. - cindy

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Tuesday, June 18, 2019 5:11 PM
To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando >
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: StarlingX 2019.05 Release Queries

Hi All,
It would be great if anyone can please address my below queries.
I have prepared an All in One Simplex/Duplex setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release.
Going forward, I do have some queries: * As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? * As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? * As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the Hardware Requirement for both the releases or they'll remain unchanged? * What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Jun 18 12:34:58 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 18 Jun 2019 12:34:58 +0000 Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com> + Forrest, do you have good answer to Gupta about using OVS without DPDK for 2018.10 release? I understand that we have the containerized OVS option without DPDK with vswitch type to "none" in stx.2.0 today. Not sure about the behavior in 2018.10 release. From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 7:22 PM To: Xie, Cindy ; Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Ghada Khalil? 
; Jones, Bruce E ; Frank Miller ; Arce Moreno, Abraham ; Hazzim Anaya Casas? ; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2018.10 without DPDK query Note :- Subject Changed Hi Cindy, Thanks for the update. I want to setup StarlingX simplex 2018.10 setup, but I don't have DPDK support on my machine as a result of which the compute is in degraded state. I can see error in ovs-vswitch logs. Error Message: error: "Error attaching device '0000:03:00.0' to DPDK" Can you please suggest an alternative to this? I have tried setting the vswitch type to be "none" and "ovs", using the below command * system modify -vswitch_type=ovs * system modify -vswitch_type=none But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remains in dead-state. I need to manually start all the services, but even after starting the services, my compute remains in "degraded" state. Can't I create a StarlingX Simplex 2018.10 Setup without DPDK support? Regards Anirudh Gupta (Senior Engineer) From: Xie, Cindy > Sent: 18 June 2019 15:30 To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2019.05 Release Queries Gupta, - We are still on track to release our release 2 (stx.2.0) on Aug'19 - StarlingX will be based on K8s from stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of footprint. - Right now, we do not support the version upgrade from 2018.10 release to new release. You need to re-deploy your cluster. Thx. - cindy From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 5:11 PM To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi All, It would be great if anyone can please address my below queries. I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: * As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? * As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? * As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. 
https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the Hardware Requirement for both the releases or they'll remain unchanged? * What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From forrest.zhao at intel.com Tue Jun 18 13:58:05 2019 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Tue, 18 Jun 2019 13:58:05 +0000 Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com> Message-ID: <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com> If I remember correctly, 2018.10 release only support OVS-DPDK as virtual switch. Ghada may be able to double confirm that. Also I can't find any instruction in 2018.10 AIO simplex deployment guide https://docs.starlingx.io/deployment_guides/current/simplex.html to set the virtual switch to OVS. From: Xie, Cindy Sent: Tuesday, June 18, 2019 8:35 PM To: Anirudh Gupta ; Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Ghada Khalil? ; Jones, Bruce E ; Frank Miller ; Arce Moreno, Abraham ; Hazzim Anaya Casas? ; Hernandez Gonzalez, Fernando ; Zhao, Forrest Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query + Forrest, do you have good answer to Gupta about using OVS without DPDK for 2018.10 release? I understand that we have the containerized OVS option without DPDK with vswitch type to "none" in stx.2.0 today. Not sure about the behavior in 2018.10 release. 
From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 7:22 PM To: Xie, Cindy >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2018.10 without DPDK query Note :- Subject Changed Hi Cindy, Thanks for the update. I want to setup StarlingX simplex 2018.10 setup, but I don't have DPDK support on my machine as a result of which the compute is in degraded state. I can see error in ovs-vswitch logs. Error Message: error: "Error attaching device '0000:03:00.0' to DPDK" Can you please suggest an alternative to this? I have tried setting the vswitch type to be "none" and "ovs", using the below command * system modify -vswitch_type=ovs * system modify -vswitch_type=none But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remains in dead-state. I need to manually start all the services, but even after starting the services, my compute remains in "degraded" state. Can't I create a StarlingX Simplex 2018.10 Setup without DPDK support? Regards Anirudh Gupta (Senior Engineer) From: Xie, Cindy > Sent: 18 June 2019 15:30 To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2019.05 Release Queries Gupta, - We are still on track to release our release 2 (stx.2.0) on Aug'19 - StarlingX will be based on K8s from stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of footprint. - Right now, we do not support the version upgrade from 2018.10 release to new release. You need to re-deploy your cluster. Thx. - cindy From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 5:11 PM To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi All, It would be great if anyone can please address my below queries. I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: * As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? * As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? * As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. 
https://docs.starlingx.io/deployment_guides/current/duplex.html
https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html
So will there be any change in the hardware requirements between the two releases, or will they remain unchanged?
* What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release?

Regards

Anirudh Gupta
(Senior Engineer)

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From Anirudh.Gupta at hsc.com Tue Jun 18 18:14:20 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Tue, 18 Jun 2019 18:14:20 +0000
Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query
In-Reply-To: <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com>, <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com>
Message-ID:

Hi Forrest & Team,

I agree that there is no such instruction mentioned in the document for 2018.10 AIO Simplex mode for OVS.

It's just that my system doesn't have DPDK support, and I need to deploy the 2018.10 release until Release 2.0 is released in August 2019 as per the plan.

So can I set up StarlingX 2018.10 AIO Simplex on my bare metal machine without DPDK support? Is there any flag that could be disabled, or any workaround that would work for me?

Looking forward to your response.

Regards
Anirudh Gupta

From: Zhao, Forrest
Sent: Tuesday, 18 June, 7:28 PM
Subject: RE: StarlingX 2018.10 without DPDK query
To: Xie, Cindy, Anirudh Gupta, Jones, Bruce E, Winnicki, Chris, Ildiko Vancsa, Ghada Khalil?, Jones, Bruce E, Frank Miller, Arce Moreno, Abraham, Hazzim Anaya Casas?, Hernandez Gonzalez, Fernando
Cc: starlingx-discuss at lists.starlingx.io, starlingx-announce at lists.starlingx.io

If I remember correctly, 2018.10 release only support OVS-DPDK as virtual switch. Ghada may be able to double confirm that.
Also I can’t find any instruction in 2018.10 AIO simplex deployment guide https://docs.starlingx.io/deployment_guides/current/simplex.html to set the virtual switch to OVS. From: Xie, Cindy Sent: Tuesday, June 18, 2019 8:35 PM To: Anirudh Gupta ; Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Ghada Khalil? ; Jones, Bruce E ; Frank Miller ; Arce Moreno, Abraham ; Hazzim Anaya Casas? ; Hernandez Gonzalez, Fernando ; Zhao, Forrest Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query + Forrest, do you have good answer to Gupta about using OVS without DPDK for 2018.10 release? I understand that we have the containerized OVS option without DPDK with vswitch type to “none” in stx.2.0 today. Not sure about the behavior in 2018.10 release. From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 7:22 PM To: Xie, Cindy >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2018.10 without DPDK query Note :- Subject Changed Hi Cindy, Thanks for the update. I want to setup StarlingX simplex 2018.10 setup, but I don't have DPDK support on my machine as a result of which the compute is in degraded state. I can see error in ovs-vswitch logs. Error Message: error: "Error attaching device '0000:03:00.0' to DPDK" Can you please suggest an alternative to this? I have tried setting the vswitch type to be “none” and “ovs”, using the below command system modify –vswitch_type=ovs system modify –vswitch_type=none But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remains in dead-state. I need to manually start all the services, but even after starting the services, my compute remains in “degraded” state. Can’t I create a StarlingX Simplex 2018.10 Setup without DPDK support? Regards Anirudh Gupta (Senior Engineer) From: Xie, Cindy > Sent: 18 June 2019 15:30 To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2019.05 Release Queries Gupta, - We are still on track to release our release 2 (stx.2.0) on Aug’19 - StarlingX will be based on K8s from stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of footprint. - Right now, we do not support the version upgrade from 2018.10 release to new release. You need to re-deploy your cluster. Thx. - cindy From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 5:11 PM To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi All, It would be great if anyone can please address my below queries. 
I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the Hardware Requirement for both the releases or they’ll remain unchanged? What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. 
Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From Tee.Ngo at windriver.com Tue Jun 18 23:52:13 2019
From: Tee.Ngo at windriver.com (Ngo, Tee)
Date: Tue, 18 Jun 2019 23:52:13 +0000
Subject: [Starlingx-discuss] STARLINGX ISSUE
In-Reply-To:
References:
Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A60030@ALA-MBD.corp.ad.wrs.com>

Hi,

These are harmless. They come from a debug message that has long been removed. The Jun. 17th build should not have this log.

Can you provide the output of the following:

cat /etc/build.info
grep -A1 "PLAY RECAP" ansible.log

Tee

From: Akshay 346 [mailto:Akshay.346 at hsc.com]
Sent: June-18-19 3:06 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] STARLINGX ISSUE

Hello Team,

I am trying to deploy the All-in-one simplex mode of StarlingX with the latest builds for the 19.05 release (i.e. the image taken from the build of 17 June 19).

The ansible-playbook run fails at:
TASK [apply-bootstrap-manifest : Applying puppet bootstrap manifest]
stating:
"stderr": "cp: cannot stat '/tmp/hieradata/192.168.204.3.yaml': No such file or directory"
The same error appears for several other files, such as '/tmp/hieradata/system.yaml' and '/tmp/hieradata/secure_system.yaml'.

Please guide me on how to resolve this issue.

Best Regards,
Akshay

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From build.starlingx at gmail.com Wed Jun 19 02:06:18 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 18 Jun 2019 22:06:18 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_container_setup - Build # 313 - Failure!
Message-ID: <430806770.57.1560909979200.JavaMail.javamailuser@localhost>

Project: STX_BUILD_container_setup
Build #: 313
Status: Failure
Timestamp: 20190619T014828Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190619T013000Z/logs
--------------------------------------------------------------------------------
Parameters

PROJECT: master
MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190619T013000Z
DOCKER_BUILD_ID: jenkins-master-20190619T013000Z-builder
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190619T013000Z/logs
DOCKER_BUILD_TAG: master-20190619T013000Z-builder-image
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190619T013000Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/master

From build.starlingx at gmail.com Wed Jun 19 02:06:21 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 18 Jun 2019 22:06:21 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 151 - Failure!
Message-ID: <462480893.60.1560909982390.JavaMail.javamailuser@localhost>

Project: STX_build_master_master
Build #: 151
Status: Failure
Timestamp: 20190619T013000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190619T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From chenjie.xu at intel.com Wed Jun 19 02:47:20 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 19 Jun 2019 02:47:20 +0000
Subject: [Starlingx-discuss] STX 2.0 (Containerized): Configuring Data network (Provider Network) type as flat fails
In-Reply-To:
References:
Message-ID:

Hi Shashi,
You can refer to the below commands to set up a flat network:

1. Please make sure the interface enp61s0f0 is connected to your lab network physically.

2. source /etc/platform/openrc
system host-lock controller-0
system datanetwork-add public flat
system datanetwork-add internal flat
system host-if-list -a controller-0
system host-if-modify -m 1500 -n data0 -d public -c data controller-0 $UUID_enp61s0f0
system host-if-modify -m 1500 -n data1 -d internal -c data controller-0 $UUID_enp61s0f1
system host-unlock controller-0

3. export OS_CLOUD=openstack_helm
neutron net-create --provider:network_type=flat --provider:physical_network=public --router:external external-net
neutron subnet-create external-net 192.168.1.0/24 --name external-subnet --gateway 192.168.1.1 --allocation-pool start=192.168.1.200,end=192.168.1.250
neutron net-create --provider:network_type=flat --provider:physical_network=internal net1
neutron subnet-create net1 192.168.5.0/24 --name subnet1
neutron router-create router1
neutron router-gateway-set router1 external-net
neutron router-interface-add router1 subnet1

Best Regards,
Xu, Chenjie
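As a quick check after the unlock, the data networks and interface assignments should be visible from both the platform and the OpenStack side. A minimal sketch assuming the names used above; "system datanetwork-list" and "system interface-datanetwork-list" are the listing counterparts of the add/assign commands, and the exact output will vary by build:

system datanetwork-list
system interface-datanetwork-list controller-0
export OS_CLOUD=openstack_helm
neutron net-list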
From: Shashi, YatindraX [mailto:yatindrax.shashi at intel.com]
Sent: Tuesday, June 18, 2019 5:21 PM
To: starlingx-discuss at lists.starlingx.io
Cc: Wagner, Marcel ; Bruecher, Bjoern 
Subject: [Starlingx-discuss] STX 2.0 (Containerized): Configuring Data network (Provider Network) type as flat fails

Hi All,
I have installed a containerized STX 2.0 system in my lab and wanted to connect the data network. I used the default vswitch type, i.e. OVS. My lab network is a flat network and I don't have easy access to the switch, so I would like to use the flat data network type. But the wiki page only talks about the vlan type.
I used the commands "system datanetwork-add ${PHYSNET0} flat", "system host-if-modify" and "system interface-datanetwork-assign", but I am unable to connect to the data network. I see the error shown in the attached image.
Has anybody tried the flat network type, or can somebody tell me what the error could be and how I should debug it? That would be very helpful for our deployment and testing.

Mit freundlichen Grüßen/ with best regards,
Yatindra Shashi
Munich, Germany

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From cindy.xie at intel.com Wed Jun 19 13:41:54 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 19 Jun 2019 13:41:54 +0000
Subject: [Starlingx-discuss] Notes: Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/19
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FA8387@SHSMSX104.ccr.corp.intel.com>

Agenda & notes for the 6/19 meeting:

- Ceph test status report (Abraham/Fernando)
Test status tracking: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1145711595
15 pass, 1 WIP, 10 blocked
Issues blocking P1 cases:
- Task 30351 under Story: https://storyboard.openstack.org/#!/story/2003909, Abraham to send email to Frank Miller and query the status of this task.
2 patches uploaded & under review by Daniel:
- https://review.opendev.org/#/c/664982/
- https://review.opendev.org/#/c/664983/
Brent agreed to allow these 2 patches to be merged after MS2 so that they can be part of stx.2.0. Cindy to align with Frank and Ghada on the ask.
- STOR_FAULT_023 still blocked? Hold on until a new deployment with the fix.

- QAT test status report (Ricardo)
Ricardo: we are able to have the QAT (PCIe card version) up and running.
Test status tracking: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=84126711
9 pass, 1 WIP, 1 NA, 1 not executed
Next step is working on QATZip testing and RESTAPI testing. Need QATZip built in with a customized image for the guest OS. Expect to get this finished by today.
RESTAPI: working with Shuicheng and trying to understand the specific test requirements.
AR: Ricardo to contact Numan to get detailed test requirements about RESTAPI.

- Bug review for stx.storage and stx.distro.other (Tingjie/Ovidiu, Yong/Shuicheng/Bin)
Stx.distro.other:
1. https://bugs.launchpad.net/starlingx/+bug/1832854
2. https://bugs.launchpad.net/starlingx/+bug/1833157, critical and blocking sanity. Suspect this is related to the GDC test automation setup. Shuicheng can deploy successfully in the SH lab without issue.
3. https://bugs.launchpad.net/starlingx/+bug/1829941: occasionally reproduced.
4. https://bugs.launchpad.net/starlingx/+bug/1827258 & https://bugs.launchpad.net/starlingx/+bug/1832647 may be related; Bin is still working to understand why kernel memory runs short.
5. https://bugs.launchpad.net/starlingx/+bug/1830971, no chance to repro on bare metal yet.
Tingjie:
1. https://bugs.launchpad.net/starlingx/+bug/1827080
2. https://bugs.launchpad.net/starlingx/+bug/1827119
3. https://bugs.launchpad.net/starlingx/+bug/1831064
4. https://bugs.launchpad.net/starlingx/+bug/1831064 still pending input from submitter.
One patch has been merged. 1827080 and 1827119 need verification on the latest code base. Tingjie to contact Martin Chen to verify the latest ansible setup on VE.
Liang: https://bugs.launchpad.net/starlingx/+bug/1831635, also cannot reproduce it and needs info from the submitter.
Ovidiu:
1. https://bugs.launchpad.net/starlingx/+bug/1827514,
2. https://bugs.launchpad.net/starlingx/+bug/1829855
3. 
https://bugs.launchpad.net/starlingx/+bug/1830191
4. https://bugs.launchpad.net/starlingx/+bug/1829844, WIP and pending patch review.
Marked one bug as "invalid" already because it could not be reproduced on the latest code.
Daniel:
1. https://bugs.launchpad.net/starlingx/+bug/1830809, updated documentation as the fix.
2. https://bugs.launchpad.net/starlingx/+bug/1831300, not started, will do it tomorrow.

- Opens (all)
Abandoned the influxdb upgrade, and we have all SBs finished for stx.2.0.

_____________________________________________
From: Xie, Cindy
Sent: Tuesday, June 18, 2019 9:30 PM
To: 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent' ; Wold, Saul
Subject: Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/19

Agenda:
- Ceph test status report (Abraham/Fernando)
- QAT test status report (Ricardo)
- Bug review for stx.storage and stx.distro.other (Tingjie/Ovidiu, Yong/Shuicheng/Bin)
- Opens (all)

Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, June 19, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From Bill.Zvonar at windriver.com Wed Jun 19 15:00:16 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 19 Jun 2019 15:00:16 +0000
Subject: [Starlingx-discuss] Community Call (June 19, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A7F3BA@ALA-MBD.corp.ad.wrs.com>

Notes & actions from today's call...

MS-3 Status
- reviewed last Thursday, plan to declare this Monday, conditional on a green sanity
- haven't had one yet, so we're still waiting on it
- the issue from Sunday night has been pulled & is being deferred to 3.0 (containerized openstack client)
- per Ada the more recent issue was due to some issues encountered during the build (pulling a repo from Japan, per Scott)
- the new build has been restarted & Ada will start with it as soon as it's done
- other items
- OpenStack Services: per Frank, everything got in except the aforementioned openstack client
- External Placement: per Cindy, it's in progress, they're addressing code reviewers' comments - should be in soon, maybe by end of week
- ACTION: Bruce to find out when the OpenStack Helm meeting is so we can represent (the changes haven't been pushed up there yet)
- Ansible Restructuring: per Dariush, basically done but some small tweaks
- wrsroot - this is in (!)
- good work Saul & everyone; the wrsroot changes for the docs have merged as well
- we agreed to cancel the release team meeting for tomorrow (June 20), since many folks will be out

Defect Trend (Bill)
- https://docs.google.com/spreadsheets/d/1DZZgqrCIL6wxv51_yFBk6Lfmtf1AqPD6z7e5hEs3prU/edit?usp=sharing
- definitely will need help/focus to converge by RC1

Meeting with SUSE (Bruce)
- Ian & Bruce met with SUSE yesterday (Networking & Cloud Software)
- SUSE is interested, wants us to help them build a case for getting involved

Updates on Actions
- wrsroot (if not already covered in MS-3 discussion) - done, closed
- sanity, in general - still not boring
- big files
- Dropbox? - is there an open equivalent?
- CENGN? - how much will they store for us?
- Nextcloud - could use an open storage service like Nextcloud - still need to have a place to store all the stuff
- Mega.nz - someone *could* use this informally, though it wouldn't be an official thing (just like any other storage service that's free or has a free tier)
- ACTION: Dean & Scott talk about how we could hook up a service like Nextcloud into CENGN
- Community Activity Dashboard
- add github/starlingx-staging repos as an input source - in progress
- these repos: jenkins, stx-packaging, tools-contrib (per Dean/Scott)
- the plan is to keep git and Gerrit changes separate
- a way to see which commits a contributor has done - in the requirements stage : )
- Draft New Wiki
- check it out & provide comments to Bruce: https://wiki.openstack.org/wiki/StarlingX/Draft_new_wiki_home_page

Changes allowable post Milestone3?
- should be bug fixes only, with some exceptions for 'safe' code (benign) changes targeted to R3 - at the discretion of the Cores & TLs

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, June 18, 2019 9:50 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Community Call (June 19, 2019)

Reminder of tomorrow's Community call, topics include...

- MS-3 status
- bug count / resolution forecast

Please feel free to add topics to the agenda at [0].

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190619T1400

From chris.friesen at windriver.com Wed Jun 19 15:36:05 2019
From: chris.friesen at windriver.com (Chris Friesen)
Date: Wed, 19 Jun 2019 09:36:05 -0600
Subject: [Starlingx-discuss] timezone confusion on starlingx meetings wiki
Message-ID:

Hi,

I just noticed that the StarlingX meetings wiki (https://wiki.openstack.org/wiki/Starlingx/Meetings) seems to have mismatching and wrong information.

First, the meeting times in the chart at the top of the page don't match the detailed listing later on. For example, the chart says the security project meets at "6:30AM PST / 1330 UTC" but the detailed listing says it meets at "6am PDT / 1400 UTC". Similarly, the chart says the multi-OS subproject meets at "7:00AM PST / 1400 UTC" while the detailed listing says "7am PDT / 1500 UTC".

Second, the offset between UTC and PDT/PST is inconsistent. PDT should be UTC-7, while PST should be UTC-8.

Third, nowhere actually uses PST currently as all those locations use PDT in the summer. Does it make sense to have meeting times in PST right now?
Thanks, Chris From Ghada.Khalil at windriver.com Wed Jun 19 16:15:56 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 19 Jun 2019 16:15:56 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Release meeting Message-ID: <151EE31B9FCCA54397A757BC674650F0C153D6CF@ALA-MBD.corp.ad.wrs.com> As discussed in the community meeting on 6/19, we will not hold the release meeting this week. Regards, Ghada Weekly meeting on Thursday 11AM PT / 1900 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1779 bytes Desc: not available URL: From scott.little at windriver.com Wed Jun 19 16:28:27 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 19 Jun 2019 12:28:27 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 151 - Failure! In-Reply-To: <462480893.60.1560909982390.JavaMail.javamailuser@localhost> References: <462480893.60.1560909982390.JavaMail.javamailuser@localhost> Message-ID: Created launchpad ... https://bugs.launchpad.net/starlingx/+bug/1833444 I'll have a fix shortly. In the meantime, the offending site is back up, so I've launched a new build. It should be ready by 1 pm EST (17:00 UTC) Scott On 2019-06-18 10:06 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_master > Build #: 151 > Status: Failure > Timestamp: 20190619T013000Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190619T013000Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Jun 19 16:41:53 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 19 Jun 2019 12:41:53 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 151 - Failure! In-Reply-To: References: <462480893.60.1560909982390.JavaMail.javamailuser@localhost> Message-ID: The new build is available! Scott On 2019-06-19 12:28 p.m., Scott Little wrote: > Created launchpad ... > > https://bugs.launchpad.net/starlingx/+bug/1833444 > > I'll have a fix shortly. > > In the meantime, the offending site is back up, so I've launched a > new build. 
It should be ready by 1 pm EST (17:00 UTC) > > Scott > > > On 2019-06-18 10:06 p.m., build.starlingx at gmail.com wrote: >> Project: STX_build_master_master >> Build #: 151 >> Status: Failure >> Timestamp: 20190619T013000Z >> >> Check logs at: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190619T013000Z/logs >> -------------------------------------------------------------------------------- >> Parameters >> >> BUILD_CONTAINERS_DEV: false >> BUILD_CONTAINERS_STABLE: false >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jun 19 17:21:47 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Jun 2019 17:21:47 +0000 Subject: [Starlingx-discuss] timezone confusion on starlingx meetings wiki In-Reply-To: References: Message-ID: <20190619172147.putrwh2gusggdlby@yuggoth.org> On 2019-06-19 09:36:05 -0600 (-0600), Chris Friesen wrote: > I just noticed that the StarlingX meetings wiki > (https://wiki.openstack.org/wiki/Starlingx/Meetings) seems to have > mismatching and wrong information. [...] > nowhere actually uses PST currently as all those locations use PDT in > the summer. Does it make sense to have meeting times in PST right now? If it helps, the OpenStack community long ago gave up on trying to book meetings in any TZ other than UTC, and instead provides calendar files in a popular standard format and expects meeting attendees to be responsible for conversions to their own personal local timezones. There are plenty of parts of the World where people observe no local "Summer Time" or "Daylight Savings Time" (including some places in the continental USA for that matter), and even the parts which do have something like it don't all switch at the same times of year. To make matters worse, from one year to the next, governments like to decide to change those dates on you so even trying to maintain a map of them on your own is ill-advised. For that matter, the entire Pacific coast of the USA may soon switch to "year-round DST" which means there will essentially be no more PST timezone in the USA. The only logical solution is to agree on a coordinated, universal time. Fortunately there is one. Unfortunately it's not "convenient" for a lot of people, but at least it's universally inconvenient. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mark at openstack.org Wed Jun 19 17:24:06 2019 From: mark at openstack.org (Mark Collier) Date: Wed, 19 Jun 2019 12:24:06 -0500 (CDT) Subject: [Starlingx-discuss] timezone confusion on starlingx meetings wiki In-Reply-To: <20190619172147.putrwh2gusggdlby@yuggoth.org> References: <20190619172147.putrwh2gusggdlby@yuggoth.org> Message-ID: <1560965046.07661974@emailsrvr.com> UTC: Universally Inconvenient Has a nice ring to it. 
On Wednesday, June 19, 2019 12:21pm, "Jeremy Stanley" said: > [...] > The only logical solution is to agree on a coordinated, universal > time. Fortunately there is one. Unfortunately it's not "convenient" > for a lot of people, but at least it's universally inconvenient. > -- > Jeremy Stanley -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jun 19 17:29:27 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 19 Jun 2019 10:29:27 -0700 Subject: [Starlingx-discuss] [multios] Flock systemd services cleanup for stx-3.0 Message-ID: <3e9d8190-1ebd-4bd9-f662-8956d0ab3bab@linux.intel.com> Hi Folks, One of the things that the Multi-OS work has shown is that a number of the flock packages use a hybrid of systemd and sysvinit. This means that the sysvinit scripts are called by systemd unit files, and OBS and rpmlint call these out as warnings. We will be creating a storyboard that will list the services that need to be fully converted to systemd unit files. This would be a good set of tasks for community members since it will provide good interactions with the flock source and testing to ensure the services start up correctly. The work is also independent enough that multiple people or teams can work on each package. There are a number of resources on the web with information about converting from sysvinit scripts to systemd unit files. This can be a low priority ongoing clean-up type of activity. Sau!
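To make the shape of the work concrete, here is a rough before/after sketch; the service, script, and daemon names are invented for illustration and are not taken from the flock. Today a hybrid package ships a unit that just wraps the init script:

# /usr/lib/systemd/system/example-flock.service (hybrid: wraps the init script)
[Unit]
Description=Example flock service
After=network-online.target

[Service]
Type=forking
ExecStart=/etc/init.d/example-flock start
ExecStop=/etc/init.d/example-flock stop

[Install]
WantedBy=multi-user.target

The converted package drops the init script and describes the daemon directly, which is what rpmlint wants to see:

# /usr/lib/systemd/system/example-flock.service (native unit)
[Unit]
Description=Example flock service
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/example-flockd --config /etc/example-flock/example-flock.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target

Testing each conversion is then mostly "systemctl restart example-flock" plus checking that systemctl status and the journal look sane.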
From Don.Penney at windriver.com Wed Jun 19 18:19:20 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 19 Jun 2019 18:19:20 +0000 Subject: [Starlingx-discuss] [multios] Flock systemd services cleanup for stx-3.0 In-Reply-To: <3e9d8190-1ebd-4bd9-f662-8956d0ab3bab@linux.intel.com> References: <3e9d8190-1ebd-4bd9-f662-8956d0ab3bab@linux.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC14F43D4@ALA-MBD.corp.ad.wrs.com> +1 from me -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Wednesday, June 19, 2019 1:29 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [multios] Flock systemd services cleanup for stx-3.0 Hi Folks, One of the things that the Multi-OS work has shown is that a number of the flock packages that use a hybrid of systemd and sysvinit. This means that the sysvini scripts are called by systemd unit files. OBS and rpmlint called out these as warnings. We will be creating a storyboard that will list the services that need to be fully converted to systemd unit files This would be a good set of tasks for community members since it will provide good interactions with the flock source and testing to ensure the serivces start up correctly. The work is also independent enough that multiple people or teams can work on each package. There are a number of resources on the web with information about converting from sysvinit scripts to systemd unit files. This can be a low priority ongoing clean-up type of activity. Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Wed Jun 19 19:22:11 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 19 Jun 2019 19:22:11 +0000 Subject: [Starlingx-discuss] timezone confusion on starlingx meetings wiki In-Reply-To: <1560965046.07661974@emailsrvr.com> References: <20190619172147.putrwh2gusggdlby@yuggoth.org> <1560965046.07661974@emailsrvr.com> Message-ID: <9A85D2917C58154C960D95352B22818BD0771EE1@fmsmsx123.amr.corp.intel.com> Both of the last two times that the US changed times between standard and daylight, the project made a conscious decision to keep meeting times the same for the North American community members. I for one am greatly appreciative of that. It helps me avoid major carnage on my calendar. But I’m wondering if we’ve been optimizing for the wrong users. I have a ton of meetings on my calendar – both internal and for the project. But I would guess that the bulk of the folks in our community only have a few internal meetings and only attend a few project meetings. So maybe we should optimize for the many and not the few? Is the “provides calendar files in a popular standard format” this [0]? I see that there are no StarlingX meetings listed there. Brucej [0] http://eavesdrop.openstack.org/irc-meetings.ical From: Mark Collier [mailto:mark at openstack.org] Sent: Wednesday, June 19, 2019 10:24 AM To: Jeremy Stanley Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] timezone confusion on starlingx meetings wiki UTC: Universally Inconvenient Has a nice ring to it. 
On Wednesday, June 19, 2019 12:21pm, "Jeremy Stanley" said: > [...] > The only logical solution is to agree on a coordinated, universal > time. Fortunately there is one. Unfortunately it's not "convenient" > for a lot of people, but at least it's universally inconvenient. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Jun 19 20:24:20 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 19 Jun 2019 20:24:20 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 6/19/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1B79701@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Wed Jun 19 20:26:33 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 19 Jun 2019 20:26:33 +0000 Subject: [Starlingx-discuss] [multios] Flock systemd services cleanup for stx-3.0 In-Reply-To: <3e9d8190-1ebd-4bd9-f662-8956d0ab3bab@linux.intel.com> References: <3e9d8190-1ebd-4bd9-f662-8956d0ab3bab@linux.intel.com> Message-ID: +1 Maybe this is out of scope, but a nice stretch objective would be to also move away from having Service Management (SM) use OCF scripts for some of these services. We currently have 2 methods of launching services, one during the bootstrap that uses systemd, and then another that is managed by SM that invokes OCF scripts. SM is fully capable of managing services through systemd (and does for some), so converting these remaining ones would also clean up the process management.
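To illustrate the two launch paths with a made-up service name (the real agent paths vary per service), today SM invokes an OCF resource agent using the standard OCF calling convention, roughly:

OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/platform/example-service start
OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/platform/example-service monitor

whereas for a systemd-managed service the same operations reduce to:

systemctl start example-service.service
systemctl is-active example-service.service

so the conversion is mostly deleting the OCF agent and pointing SM's service definition at the unit instead.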
-Matt On 2019-06-19, 1:30 PM, "Saul Wold" wrote: Hi Folks, One of the things that the Multi-OS work has shown is that a number of the flock packages that use a hybrid of systemd and sysvinit. This means that the sysvini scripts are called by systemd unit files. OBS and rpmlint called out these as warnings. We will be creating a storyboard that will list the services that need to be fully converted to systemd unit files This would be a good set of tasks for community members since it will provide good interactions with the flock source and testing to ensure the serivces start up correctly. The work is also independent enough that multiple people or teams can work on each package. There are a number of resources on the web with information about converting from sysvinit scripts to systemd unit files. This can be a low priority ongoing clean-up type of activity. Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From fungi at yuggoth.org Wed Jun 19 20:32:38 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Jun 2019 20:32:38 +0000 Subject: [Starlingx-discuss] timezone confusion on starlingx meetings wiki In-Reply-To: <9A85D2917C58154C960D95352B22818BD0771EE1@fmsmsx123.amr.corp.intel.com> References: <20190619172147.putrwh2gusggdlby@yuggoth.org> <1560965046.07661974@emailsrvr.com> <9A85D2917C58154C960D95352B22818BD0771EE1@fmsmsx123.amr.corp.intel.com> Message-ID: <20190619203238.mhdkg2znedxh3acl@yuggoth.org> On 2019-06-19 19:22:11 +0000 (+0000), Jones, Bruce E wrote: [...] > Is the “provides calendar files in a popular standard format” this > [0]? I see that there are no StarlingX meetings listed there. > > [0] http://eavesdrop.openstack.org/irc-meetings.ical [...] I was referring more to the individual per-meeting files like http://eavesdrop.openstack.org/calendars/openstack-security-sig-meeting.ics (the overarching .ical file is mostly useful for finding meeting overlap in scheduling for a large community). Anyway, my point was not to promote specific technology, but rather to say that it's possible to coordinate meetings in a common timezone and that doing so in one which doesn't itself jump around at various times of year at least provides a stable point of reference and gives all attendees an equal chance of figuring out what that means for the particular bit of the planet on which they live (or happen to be visiting in a given week). I have no idea whether or not StarlingX holds meetings over IRC, but the reason you're not finding any StarlingX meetings in the list is that it's based on https://opendev.org/opendev/irc-meetings/src/branch/master/meetings into which nobody has yet added any StarlingX-specific entries. I don't see any reason, either with OpenDev sysadmin or OpenStack TC hats on, to consider that an OpenStack-only resource. I expect the domain name there will switch to opendev.org in the near-ish future at least, so if the current domain on the site URLs doesn't bother you and the StarlingX IRC meeting attendees would consider it useful then please feel free to push up additions through Gerrit. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Ghada.Khalil at windriver.com Wed Jun 19 00:37:27 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 19 Jun 2019 00:37:27 +0000 Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com>, <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C153D4F7@ALA-MBD.corp.ad.wrs.com> I'm not aware of a way to allow the use of ovs (w/o dpdk) in 2018.10 To use OVS, you will need to use a recent load built from master and follow the updated deployment instructions: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ The last green sanity was using ISO 20190613. You can use the symlink: latest_green_build or monitor the sanity emails sent regularly to the mailing list. Regards, Ghada From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 2:14 PM To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Khalil, Ghada; Miller, Frank; Arce Moreno, Abraham; Hazzim Anaya Casas?; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: StarlingX 2018.10 without DPDK query Hi Forrest & Team I agree that there is no such instruction mentioned in the document for 2018.10 AIO Simplex Mode for OVS. It's just that my system doesn't have DPDK support and I need to deploy 2018.10 release until Release 2.0 is released in August 2019 as per the plan. So Can I setup STARLINGX 2018.10 AIO SIMPLEX in my bare metal without DPDK support? Is there any flag that could be disabled or any workaround that can work for me? Looking Forward for your response. Regards Anirudh Gupta From: Zhao, Forrest Sent: Tuesday, 18 June, 7:28 PM Subject: RE: StarlingX 2018.10 without DPDK query To: Xie, Cindy, Anirudh Gupta, Jones, Bruce E, Winnicki, Chris, Ildiko Vancsa, Ghada Khalil?, Jones, Bruce E, Frank Miller, Arce Moreno, Abraham, Hazzim Anaya Casas?, Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io, starlingx-announce at lists.starlingx.io If I remember correctly, 2018.10 release only support OVS-DPDK as virtual switch. Ghada may be able to double confirm that. Also I can't find any instruction in 2018.10 AIO simplex deployment guide https://docs.starlingx.io/deployment_guides/current/simplex.html to set the virtual switch to OVS. From: Xie, Cindy Sent: Tuesday, June 18, 2019 8:35 PM To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando >; Zhao, Forrest > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query + Forrest, do you have good answer to Gupta about using OVS without DPDK for 2018.10 release? I understand that we have the containerized OVS option without DPDK with vswitch type to "none" in stx.2.0 today. Not sure about the behavior in 2018.10 release. From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 7:22 PM To: Xie, Cindy >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? 
>; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2018.10 without DPDK query Note :- Subject Changed Hi Cindy, Thanks for the update. I want to setup StarlingX simplex 2018.10 setup, but I don't have DPDK support on my machine as a result of which the compute is in degraded state. I can see error in ovs-vswitch logs. Error Message: error: "Error attaching device '0000:03:00.0' to DPDK" Can you please suggest an alternative to this? I have tried setting the vswitch type to be "none" and "ovs", using the below command system modify -vswitch_type=ovs system modify -vswitch_type=none But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remains in dead-state. I need to manually start all the services, but even after starting the services, my compute remains in "degraded" state. Can't I create a StarlingX Simplex 2018.10 Setup without DPDK support? Regards Anirudh Gupta (Senior Engineer) From: Xie, Cindy > Sent: 18 June 2019 15:30 To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2019.05 Release Queries Gupta, - We are still on track to release our release 2 (stx.2.0) on Aug'19 - StarlingX will be based on K8s from stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of footprint. - Right now, we do not support the version upgrade from 2018.10 release to new release. You need to re-deploy your cluster. Thx. - cindy From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 5:11 PM To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi All, It would be great if anyone can please address my below queries. I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the Hardware Requirement for both the releases or they'll remain unchanged? 
What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jun 19 21:02:04 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 19 Jun 2019 14:02:04 -0700 Subject: [Starlingx-discuss] [multios] Flock systemd services cleanup for stx-3.0 In-Reply-To: References: <3e9d8190-1ebd-4bd9-f662-8956d0ab3bab@linux.intel.com> Message-ID: <603cb9d6-e84f-165b-5b7b-c2fb85e10750@linux.intel.com> On 6/19/19 1:26 PM, Peters, Matt wrote: > +1 > > Maybe this is out of scope, but a nice stretch objective would be to also move away from having Service Management (SM) use OCF scripts for some of these services. We currently have 2 methods of launching services, one during the bootstrap that uses systemd, and then another that is managed by SM that invokes OCF scripts. SM is fully capable of managing services through systemd (and does for some), so converting these remaining ones would also clean up the process management. > Matt, Would you be willing to create a Storyboard and tasks for the scripts / packages that need to be converted from OCF to systemd units? Thanks Sau!
> -Matt > > On 2019-06-19, 1:30 PM, "Saul Wold" wrote: > [...] From dtroyer at gmail.com Wed Jun 19 22:53:00 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 19 Jun 2019 17:53:00 -0500 Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase Message-ID: I have done a preliminary rebase of the stx-nova Stein branch into stx/stein.2 [0]. It is passing unit and functional tests, but since some changes were required from the current upstream reviews it really needs to be checked out further. * I started with upstream stable/stein acd2daa9 (current as of yesterday noon-ish) * Artom rebased the upstream NUMA patches in master so I pulled the current patchset of those * There is a missing import in 635229 that is causing the test failures; I inserted the commit adding that inline in the PR * There were conflicts in 634605 and 634606 due to ongoing development in master since stable/stein was branched. I made the obvious corrections; there may be more required that someone who is not familiar with this code (me) would likely miss. The final pep, unit and functional jobs are running under https://review.opendev.org/#/c/656065/8 and I expect them to pass. I do believe this requires the extracted placement to be merged, so we may not be able to test it in StarlingX until that is complete. I am hoping this can be tested with just replacing the Nova docker image but I do not have the time to run through that. dt [0] PR: https://github.com/starlingx-staging/stx-nova/pull/25 -- Dean Troyer dtroyer at gmail.com From bruce.e.jones at intel.com Wed Jun 19 23:17:24 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 19 Jun 2019 23:17:24 +0000 Subject: [Starlingx-discuss] First contact SIG meeting Message-ID: <9A85D2917C58154C960D95352B22818BD077219B@fmsmsx123.amr.corp.intel.com> Reminder - the First Contact SIG is meeting tomorrow at 6:30 AM Pacific time. We plan to continue our discussion from the previous meeting on how to best welcome new community members and get them started with StarlingX. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Wed Jun 19 23:26:24 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 19 Jun 2019 23:26:24 +0000 Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase In-Reply-To: References: Message-ID: <151EE31B9FCCA54397A757BC674650F0C153DA55@ALA-MBD.corp.ad.wrs.com> FYI. 
The external placement service code merged earlier today: https://review.opendev.org/#/c/662614/ https://review.opendev.org/#/c/662371/ Cheers, Ghada -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Wednesday, June 19, 2019 6:53 PM To: starlingx Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase I have done a preliminary rebase of the stx-nova Stein branch into stx/stein.2 [0]. It is passing unit and functional tests but since some changes were required from the current upstream reviews it really needs to be checked out further. * I started with upstream stable/stein acd2daa9 (current as of yesterday noon-ish) * Artom rebased the upstream NUMA patches in master so I pulled the current patchset of those * There is a missing import in 635229 that is causing the test failures, I inserted the commit adding that inline in the PR * There were conflicts in 634605 and 634606 due to ongoing development in master since stable/stein was branched. I made the obvious corrections, there may be more required that someone who is not familiar with this code (me) would likely miss. The final pep, unit and functional jobs are running under https://review.opendev.org/#/c/656065/8 and I expect them to pass. I do believe this requires the extraced placement to be merged so we may not be able to test it in StarlingX until that is complete. I am hoping this can be tested with just replacing the Nova docker image but I do not have the time to run through that. dt [0] PR: https://github.com/starlingx-staging/stx-nova/pull/25 -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Matt.Peters at windriver.com Thu Jun 20 00:24:50 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 20 Jun 2019 00:24:50 +0000 Subject: [Starlingx-discuss] [multios] Flock systemd services cleanup for stx-3.0 In-Reply-To: <603cb9d6-e84f-165b-5b7b-c2fb85e10750@linux.intel.com> References: <3e9d8190-1ebd-4bd9-f662-8956d0ab3bab@linux.intel.com> <603cb9d6-e84f-165b-5b7b-c2fb85e10750@linux.intel.com> Message-ID: <60533503-FFED-49B8-800A-610B2AC75573@windriver.com> Hi Saul, Yes, I can create a Storyboard for that. -Matt On 2019-06-19, 5:02 PM, "Saul Wold" wrote: On 6/19/19 1:26 PM, Peters, Matt wrote: > +1 > > Maybe this is out of scope, but I nice stretch objective would to also move away from having Service Management (SM) use OCF scripts for some of these services. We currently have 2 methods of launching services, one during the bootstrap that uses systemd, and then another that is managed by SM that invokes OCF scripts. SM is fully capable of managing services through systemd (and does for some), so converting these remaining ones would also cleanup the process management. > Matt, Would you be willing to create a Storyboard and tasks for the scripts / packages that need to be converted from OCF to systemd units? Thanks Sau! > -Matt > > > On 2019-06-19, 1:30 PM, "Saul Wold" wrote: > > > Hi Folks, > > One of the things that the Multi-OS work has shown is that a number of > the flock packages that use a hybrid of systemd and sysvinit. This means > that the sysvini scripts are called by systemd unit files. OBS and > rpmlint called out these as warnings. 
We will be creating a storyboard > that will list the services that need to be fully converted to systemd > unit files. > > This would be a good set of tasks for community members since it will > provide good interactions with the flock source and testing to ensure > the services start up correctly. The work is also independent enough > that multiple people or teams can work on each package. There are a > number of resources on the web with information about converting from > sysvinit scripts to systemd unit files. > > This can be a low priority ongoing clean-up type of activity. > > Sau! > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > From maria.g.perez.ibarra at intel.com Thu Jun 20 03:32:42 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 20 Jun 2019 03:32:42 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-19 (link) Status: YELLOW
====================== Bare Metal environment ======================
AIO - Simplex:
Setup 03 TCs
Provision-Containers 01 TCs
Sanity-OpenStack 49 TCs
Sanity-Platform 11 TCs
------------------------------
TOTAL: 64 TCs
AIO - Duplex:
Setup 03 TCs
Provision-Containers 01 TCs
Sanity-OpenStack 52 TCs | 13 TCs FAIL
Sanity-Platform 09 TCs
------------------------------
TOTAL: 65 TCs
Standard - Local Storage (2+2):
Setup 03 TCs
Provision-Containers 01 TCs
Sanity-OpenStack 52 TCs
Sanity-Platform 09 TCs
------------------------------
TOTAL: 65 TCs
Standard - External Storage (2+2+2):
Setup 03 TCs
Provision-Containers 01 TCs
Sanity-OpenStack 52 TCs
Sanity-Platform 05 TCs
------------------------------
TOTAL: 61 TCs
The issue was that in the duplex configuration, volumes could not be created. We'll investigate these failures further in the next execution. Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezpeerchen at gmail.com Thu Jun 20 03:40:33 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Thu, 20 Jun 2019 11:40:33 +0800 Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Message-ID: Dear all, My STX 1.0 system automatically shuts down and powers off. Where could I check the logs for this issue? Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Thu Jun 20 05:56:01 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 20 Jun 2019 05:56:01 +0000 Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> Hi Ezpeer, Is it a virtual machine or a bare-metal system? Is it a provisioned system? And what is the system configuration: simplex/duplex or multi-node? StarlingX itself will not do an auto-shutdown, but it may auto-reboot if there is a critical error. Most of StarlingX’s logs are in the /var/log folder. You could run the “collect” cmd on your failed system after the issue occurs, and upload the generated logfile somewhere others could access. Then I will have a look at it. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 11:41 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. 
Dear all, My STX 1.0 system automatically shuts down and powers off. Where could I check the logs for this issue? Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezpeerchen at gmail.com Thu Jun 20 06:27:09 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Thu, 20 Jun 2019 14:27:09 +0800 Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> Message-ID: Dear Shuicheng, Platform: STX 1.0 (http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/) 1. Bare-metal system; my host PC's controller-0 shuts down without rebooting 2. Provisioned system, all-in-one simplex 3. collect cmd log file: https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr Thanks for your help. Best Regards Lin, Shuicheng wrote on Thursday, June 20, 2019 at 1:56 PM: Hi Ezpeer, Is it a virtual machine or a bare-metal system? Is it a provisioned system? And what is the system configuration: simplex/duplex or multi-node? StarlingX itself will not do an auto-shutdown, but it may auto-reboot if there is a critical error. Most of StarlingX’s logs are in the /var/log folder. You could run the “collect” cmd on your failed system after the issue occurs, and upload the generated logfile somewhere others could access. Then I will have a look at it. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 11:41 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear all, My STX 1.0 system automatically shuts down and powers off. Where could I check the logs for this issue? Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arjun.sundararajan at tatacommunications.com Thu Jun 20 06:30:37 2019 From: arjun.sundararajan at tatacommunications.com (Arjun Sundararajan) Date: Thu, 20 Jun 2019 06:30:37 +0000 Subject: [Starlingx-discuss] STX 2018.10 - PCI-Passthrough/PCI-SRIOV Message-ID: Hi All, We have a deployment of AIO - Duplex on a couple of bare-metal servers. We are trying to set one of the interfaces as PCI-Passthrough/PCI-SRIOV (tried both), but we observe that in nova.conf only the whitelist is getting updated and the alias has not been added. Since we are not able to find any documentation, could anyone kindly share the steps to configure this? Appreciate any inputs on this. Thanks and regards, Arjun Sundararajan -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Thu Jun 20 08:15:37 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 20 Jun 2019 08:15:37 +0000 Subject: [Starlingx-discuss] Placement could not be started issue fix! Message-ID: <93814834B4855241994F290E959305C75309ADDD@SHSMSX104.ccr.corp.intel.com> Hi Ghada and all, I see placement already merged today. However, I found an obvious issue that we have to fix before starting the daily build sanity test! I raised an LP and a related fix, below: https://bugs.launchpad.net/starlingx/+bug/1833497 (placement pod will not be started) https://review.opendev.org/#/c/666491/ The root cause is in the patch below: https://review.opendev.org/#/c/653932/ It overrides the openstack-compute-kit group, which does not include openstack-placement.
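For reference, the chart group in the armada manifest should end up looking roughly like this; it is a hand-written sketch rather than the exact manifest, and the fix simply puts openstack-placement back into the list:

schema: armada/ChartGroup/v1
metadata:
  schema: metadata/Document/v1
  name: openstack-compute-kit
data:
  description: "Deploy nova, neutron and supporting services"
  sequenced: false
  chart_group:
    - openstack-placement    # dropped by the override, so the pod never starts
    - openstack-libvirt
    - openstack-nova
    - openstack-neutron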
I have done basic verification (simplex deploy and VM creation). Please help get it merged soon, thanks! BTW, it is really not a good design: if anyone adds a new chart to compute-kit, they have to update the ovs- and ironic-related helm files as well. sysinv/helm/ironic.py sysinv/helm/openvswitch.py Thanks! Zhipeng -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: June 20, 2019 7:26 To: Dean Troyer ; starlingx Subject: Re: [Starlingx-discuss] [stx-nova] Stein branch rebase FYI. The external placement service code merged earlier today: https://review.opendev.org/#/c/662614/ https://review.opendev.org/#/c/662371/ Cheers, Ghada -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Wednesday, June 19, 2019 6:53 PM To: starlingx Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase [...] From shuicheng.lin at intel.com Thu Jun 20 08:41:52 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 20 Jun 2019 08:41:52 +0000 Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. In-Reply-To: References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> Hi Ezpeer, From the sm.log (/var/log/sm.log), it seems a service process failed to run, which led SM to think the system was in an unhealthy state and to try to reboot the system to recover. It should be a reboot, though; I am not sure why it is a shutdown in your environment, and auth.log also shows “systemd-logind[893]: info Power key pressed./info Powering Off...” In the sm.log, I also see the network adapter going up/down randomly. It may be the cause of the service process failure. I also see that VF Ethernet is enabled in the system. To isolate the cause of the issue, could you try disabling the VF Ethernet in the BIOS and using PF Ethernet only? Thanks.
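If it helps, a couple of quick things to check first (replace enp2s0f0 with your actual SR-IOV interface name; these are generic Linux checks, not StarlingX-specific commands):

# how many VFs are currently configured on the SR-IOV port
cat /sys/class/net/enp2s0f0/device/sriov_numvfs
ip link show enp2s0f0

# whether the power-off was a software shutdown or a power-button event
last -x shutdown reboot | head
grep -i "power key" /var/log/auth.log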
Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 2:27 PM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear Shuicheng, Platform: STX 1.0 ( http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/) 1. bare-metal system , MY host pc Controller-0 shutdown without reboot 2. provisioned system , all-in-one simplex 3. collet cmd log file: https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr Thanks for your help. Best Regards Lin, Shuicheng > 於 2019年6月20日 週四 下午1:56寫道: Hi Ezpeer, Is it virtual machine or bare-metal system? Is it a provisioned system? And what is the system configuration, simplex/duplex or multi node? StarlingX itself will not do auto-shutdown, but it may auto-reboot if there is critical error. Most of StarlingX’s log is at /var/log folder. You could run “collect” cmd in your fail system after the issue occur, and upload the generated logfile to somewhere others could access. Then I will have a check with it. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 11:41 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear all, My STX 1.0 will be automatically shutdown and power off. Where could i check the logs about this issue? Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezpeerchen at gmail.com Thu Jun 20 09:20:21 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Thu, 20 Jun 2019 17:20:21 +0800 Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> Message-ID: Dear Shuicheng, In my environment, i need to test pci-sriov feature. My installation Step: 1. Install STX 1.0 (2018/10) all-in-one simplex 2. Configure pci-sriov interface 3. Create network and VM on one vf port 4. Issue occurred Best Regards Lin, Shuicheng 於 2019年6月20日 週四 下午4:41寫道: > Hi Ezpeer, > > For the sm.log (/var/log/sm.log), it seems service process fail to run, > and lead to sm think system is in unhealthy state, and try to reboot system > to recover. > > It should be reboot, but not sure why it is shutdown in your environment > and also auth.log shows “systemd-logind[893]: info Power key > pressed./info Powering Off...” > > > > In the sm.log, I also see the network adapter is up/down randomly. It may > be the cause of the service process failure. > > I also find VF Ethernet is enabled in the system also. > > To isolate the issue cause, could you help try to disable the VF Ethernet > in BIOS, and use PF Ethernet only? > > Thanks. > > > > > > Best Regards > > Shuicheng > > > > *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com] > *Sent:* Thursday, June 20, 2019 2:27 PM > *To:* Lin, Shuicheng > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] STX 1.0 automatically shutdown and > power off. > > > > Dear Shuicheng, > > Platform: STX 1.0 ( > http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/ > ) > > > > 1. bare-metal system , MY host pc Controller-0 shutdown without reboot > > 2. 
provisioned system, all-in-one simplex > > 3. collect cmd log file: > > https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr > > > > Thanks for your help. > > > > Best Regards > > > > Lin, Shuicheng wrote on Thursday, June 20, 2019 at 1:56 PM: > > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezpeerchen at gmail.com Thu Jun 20 10:04:29 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Thu, 20 Jun 2019 18:04:29 +0800 Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. In-Reply-To: References: Message-ID: Dear Shuicheng, An update with new information: I reinstalled STX 1.0 (2018/10). My installation steps:
1. Install STX 1.0 (2018/10) all-in-one simplex
2. Configure the pci-sriov interface:
# source /etc/nova/openrc
# neutron providernet-create providernet-a --type=flat
# neutron providernet-create providernet-b --type=vlan
# neutron providernet-range-create --name providernet-b-range1 --range 100-400 providernet-b
# system host-if-modify -c pci-sriov controller-0 enp2s0f0 -p providernet-a -N 7
# system host-if-modify -c data controller-0 enp2s0f1 -p providernet-b
..... configure storage
# system host-unlock controller-0
The system auto-reboots.
3. After rebooting to the prompt, the issue occurred within about 10-20 minutes.
Log files: https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS Best Regards Ezpeer Chen wrote on Thursday, June 20, 2019 at 5:20 PM: > Dear Shuicheng, > > > In my environment, I need to test the pci-sriov feature. > > My installation steps: > > 1. Install STX 1.0 (2018/10) all-in-one simplex > > 2. Configure pci-sriov interface > > 3. Create network and VM on one VF port > > 4. Issue occurred > > > > > Best Regards > > > Lin, Shuicheng wrote on Thursday, June 20, 2019 at 4:41 PM: >> Hi Ezpeer, >> >> [...] >> >> To isolate the cause of the issue, could you try disabling the VF Ethernet >> in the BIOS and using PF Ethernet only? >> >> Thanks. 
>> >> >> >> >> >> Best Regards >> >> Shuicheng >> >> >> >> *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com] >> *Sent:* Thursday, June 20, 2019 2:27 PM >> *To:* Lin, Shuicheng >> *Cc:* starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] STX 1.0 automatically shutdown and >> power off. >> >> >> >> Dear Shuicheng, >> >> Platform: STX 1.0 ( >> http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/ >> ) >> >> >> >> 1. bare-metal system , MY host pc Controller-0 shutdown without reboot >> >> 2. provisioned system , all-in-one simplex >> >> 3. collet cmd log file: >> >> https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr >> >> >> >> Thanks for your help. >> >> >> >> Best Regards >> >> >> >> Lin, Shuicheng 於 2019年6月20日 週四 下午1:56寫道: >> >> Hi Ezpeer, >> >> Is it virtual machine or bare-metal system? >> >> Is it a provisioned system? And what is the system configuration, >> simplex/duplex or multi node? >> >> StarlingX itself will not do auto-shutdown, but it may auto-reboot if >> there is critical error. >> >> Most of StarlingX’s log is at /var/log folder. >> >> >> >> You could run “collect” cmd in your fail system after the issue occur, >> and upload the generated logfile to somewhere others could access. >> >> Then I will have a check with it. >> >> >> >> Best Regards >> >> Shuicheng >> >> >> >> *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com] >> *Sent:* Thursday, June 20, 2019 11:41 AM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] STX 1.0 automatically shutdown and power >> off. >> >> >> >> Dear all, >> >> My STX 1.0 will be automatically shutdown and power off. >> >> Where could i check the logs about this issue? >> >> >> Thanks a lot. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Thu Jun 20 11:21:47 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 20 Jun 2019 11:21:47 +0000 Subject: [Starlingx-discuss] Placement could not be started issue fix! In-Reply-To: <93814834B4855241994F290E959305C75309ADDD@SHSMSX104.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C75309ADDD@SHSMSX104.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC255573D@ALA-MBD.corp.ad.wrs.com> Zhipeng, See inline. Brent -----Original Message----- From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Thursday, June 20, 2019 4:16 AM To: Khalil, Ghada ; Dean Troyer ; starlingx Subject: [Starlingx-discuss] Placement could not be started issue fix! Hi Ghada and all, I see placement already merged today. However, I found an obvious issue that we have to fix it before starting daily build sanity test!! I raised a LP and related fix as below https://bugs.launchpad.net/starlingx/+bug/1833497 placement po will not be started https://review.opendev.org/#/c/666491/ Root cause is in below patch https://review.opendev.org/#/c/653932/ It overrides openstack-compute-kit group which do not include openstack-placement. I have done basic verification (simplex deploy and vm creation), Please help get it merged soon, thanks! BTW, it is really not a good design. If other guy add new chart in compute-kit, he have to Update ovs and ironic related helm file as well. sysinv/helm/ironic.py sysinv/helm/openvswitch.py [BR] Please open a LP for this. Thanks! 
Zhipeng

-----Original Message-----
From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com]
Sent: June 20, 2019 7:26
To: Dean Troyer ; starlingx
Subject: Re: [Starlingx-discuss] [stx-nova] Stein branch rebase

FYI. The external placement service code merged earlier today:
https://review.opendev.org/#/c/662614/
https://review.opendev.org/#/c/662371/

Cheers,
Ghada

-----Original Message-----
From: Dean Troyer [mailto:dtroyer at gmail.com]
Sent: Wednesday, June 19, 2019 6:53 PM
To: starlingx
Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase

I have done a preliminary rebase of the stx-nova Stein branch into stx/stein.2 [0]. It is passing unit and functional tests, but since some changes were required from the current upstream reviews, it really needs to be checked out further.

* I started with upstream stable/stein acd2daa9 (current as of yesterday noon-ish)
* Artom rebased the upstream NUMA patches in master, so I pulled the current patchset of those
* There is a missing import in 635229 that is causing the test failures; I inserted the commit adding that inline in the PR
* There were conflicts in 634605 and 634606 due to ongoing development in master since stable/stein was branched. I made the obvious corrections; there may be more required that someone who is not familiar with this code (me) would likely miss.

The final pep, unit and functional jobs are running under https://review.opendev.org/#/c/656065/8 and I expect them to pass.

I do believe this requires the extracted placement to be merged, so we may not be able to test it in StarlingX until that is complete. I am hoping this can be tested by just replacing the Nova docker image, but I do not have the time to run through that.

dt

[0] PR: https://github.com/starlingx-staging/stx-nova/pull/25

--
Dean Troyer
dtroyer at gmail.com

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Ghada.Khalil at windriver.com Thu Jun 20 13:55:33 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Thu, 20 Jun 2019 13:55:33 +0000
Subject: [Starlingx-discuss] [starlingx] about https://review.opendev.org/666491
In-Reply-To: References: Message-ID: <151EE31B9FCCA54397A757BC674650F0C153DC12@ALA-MBD.corp.ad.wrs.com>

Hi Austin,
I don't know enough technical details to respond to your question. I suggest you add your comments to the review so that the core reviewers can see them and comment.
Ghada

From: Sun, Austin [mailto:austin.sun at intel.com]
Sent: Thursday, June 20, 2019 2:25 AM
To: Rowsell, Brent; Khalil, Ghada; Qi, Mingyuan
Subject: [starlingx] about https://review.opendev.org/666491

Hi Brent & Ghada:

About this LP, I just want to get some clearer information.
For those charts, we will change node_selector_key to some new key like openstack-additional-plane.
In an AIO (Simplex/Duplex) system, openstack-additional-plane is not enabled. Then those services should not be started. Shall we reject ("force deny") attempts to add the label "openstack-additional-plane" in that case?
And for a multi-node system, openstack-additional-plane is enabled.
Then those services should be started. Is this the proper approach?

Thanks.
BR
Austin Sun.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bruce.e.jones at intel.com Thu Jun 20 14:01:53 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Thu, 20 Jun 2019 14:01:53 +0000
Subject: [Starlingx-discuss] First contact SIG meeting minutes June 20th 2019
Message-ID: <9A85D2917C58154C960D95352B22818BD077236B@fmsmsx123.amr.corp.intel.com>

Agenda for the June 20th meeting. Notes can also be found on the etherpad: https://etherpad.openstack.org/p/stx-first-contact

Attendees: Bruce, Chris Winnicki, Al Bailey, Bill Zvonar

* Need a volunteer to lead this while Bruce is out (July, August) - Bill volunteers
* What do new contributors need? See input from Matthew below (in the etherpad)
* How can we help them?
  - It's hard to help people sometimes - there are notes on the list with minimal data from people with complex configurations, but without the data needed to triage the problems.
  - Al - we should have a template of standard questions that we ask people who need help - what load are you running, what configuration are you using, what kind of network setup, storage, etc...
  - How can we figure out who the right person to help even is? E.g. recent questions on SRIOV / passthrough
  - Do OpenStack projects use forums? Bruce - I think the primary comms channels in OpenStack are IRC and email.
  - We don't have a working example of a running system that people can look at
    * Managing machines on the external network can be scary from an IT point of view
    * Packet.com infrastructure could be used for this
    * Bruce was pushing on a StarlingX in a Box project but we have backed off on this
    * We did a StarlingX hands-on workshop at OpenInfra Denver and there is a Chinese workshop coming up that will also provide hands-on experience
      - We could keep some of the Packet machines allocated to hands-on use, but this means ongoing support work for a volunteer or two
  - One thing - when code is changed, we should have a documentation update submitted as well (if needed)
* What can we learn / borrow from the OpenStack First Contact SIG? https://wiki.openstack.org/wiki/First_Contact_SIG
* How do we manage the work of helping new contributors?
  - We don't have to (nor can we) do all this work ourselves. We can also drive work items into the broader project, e.g. documentation.
* What tools, documents, improvements, etc... are needed?

-------------- next part --------------
An HTML attachment was scrubbed...
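[Returning to the placement fix discussed earlier in this digest (https://review.opendev.org/#/c/666491/): a quick way to confirm whether the fix took effect on a deployed system is to check that a placement pod actually gets scheduled once stx-openstack is applied. This is a minimal sketch only; the pod name patterns below are assumptions and may differ between builds:

  # confirm stx-openstack finished applying
  system application-list
  # a running placement pod should appear in the openstack namespace
  kubectl -n openstack get pods | grep -i placement
  # the rest of the compute-kit charts should be Running as well
  kubectl -n openstack get pods | grep -E 'nova|neutron|libvirt'

If no placement pod shows up, the override of the openstack-compute-kit chart group introduced in https://review.opendev.org/#/c/653932/ is the first place to look.]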
URL: From Anirudh.Gupta at hsc.com Thu Jun 20 04:19:23 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Thu, 20 Jun 2019 04:19:23 +0000 Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C153D4F7@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com>, <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0C153D4F7@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Team, I am trying to install StarlingX AIO Simplex 2018.10 on one of the HP Server but facing issue in Unlocking the Host [root at controller-0 ~(keystone_admin)]# system host-unlock controller-0 Rejected: Total allocated memory exceeds the total memory of controller-0 numa node 0 The below is the server's configuration: Memory - 16GB CPU cores - 24 Hard disk - 600 GB Numa nodes - 2 [root at controller-0 ~(keystone_admin)]# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 24 On-line CPU(s) list: 0-23 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 44 Model name: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz Stepping: 2 CPU MHz: 1600.000 CPU max MHz: 2666.0000 CPU min MHz: 1600.0000 BogoMIPS: 5333.56 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 12288K NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb tpr_shadow vnmi flexpriority ept vpid dtherm ida arat I am attaching the /var/log for your reference. Can you please help me in resolving the issue. Regards Anirudh GUPTA From: Khalil, Ghada Sent: 19 June 2019 06:07 To: Anirudh Gupta ; Zhao, Forrest ; Xie, Cindy ; Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Miller, Frank ; Arce Moreno, Abraham ; Hazzim Anaya Casas? ; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query I'm not aware of a way to allow the use of ovs (w/o dpdk) in 2018.10 To use OVS, you will need to use a recent load built from master and follow the updated deployment instructions: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ The last green sanity was using ISO 20190613. You can use the symlink: latest_green_build or monitor the sanity emails sent regularly to the mailing list. Regards, Ghada From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 2:14 PM To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Khalil, Ghada; Miller, Frank; Arce Moreno, Abraham; Hazzim Anaya Casas?; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: StarlingX 2018.10 without DPDK query Hi Forrest & Team I agree that there is no such instruction mentioned in the document for 2018.10 AIO Simplex Mode for OVS. It's just that my system doesn't have DPDK support and I need to deploy 2018.10 release until Release 2.0 is released in August 2019 as per the plan. 
So Can I setup STARLINGX 2018.10 AIO SIMPLEX in my bare metal without DPDK support? Is there any flag that could be disabled or any workaround that can work for me? Looking Forward for your response. Regards Anirudh Gupta From: Zhao, Forrest Sent: Tuesday, 18 June, 7:28 PM Subject: RE: StarlingX 2018.10 without DPDK query To: Xie, Cindy, Anirudh Gupta, Jones, Bruce E, Winnicki, Chris, Ildiko Vancsa, Ghada Khalil?, Jones, Bruce E, Frank Miller, Arce Moreno, Abraham, Hazzim Anaya Casas?, Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io, starlingx-announce at lists.starlingx.io If I remember correctly, 2018.10 release only support OVS-DPDK as virtual switch. Ghada may be able to double confirm that. Also I can't find any instruction in 2018.10 AIO simplex deployment guide https://docs.starlingx.io/deployment_guides/current/simplex.html to set the virtual switch to OVS. From: Xie, Cindy Sent: Tuesday, June 18, 2019 8:35 PM To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando >; Zhao, Forrest > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query + Forrest, do you have good answer to Gupta about using OVS without DPDK for 2018.10 release? I understand that we have the containerized OVS option without DPDK with vswitch type to "none" in stx.2.0 today. Not sure about the behavior in 2018.10 release. From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 7:22 PM To: Xie, Cindy >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2018.10 without DPDK query Note :- Subject Changed Hi Cindy, Thanks for the update. I want to setup StarlingX simplex 2018.10 setup, but I don't have DPDK support on my machine as a result of which the compute is in degraded state. I can see error in ovs-vswitch logs. Error Message: error: "Error attaching device '0000:03:00.0' to DPDK" Can you please suggest an alternative to this? I have tried setting the vswitch type to be "none" and "ovs", using the below command system modify -vswitch_type=ovs system modify -vswitch_type=none But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remains in dead-state. I need to manually start all the services, but even after starting the services, my compute remains in "degraded" state. Can't I create a StarlingX Simplex 2018.10 Setup without DPDK support? Regards Anirudh Gupta (Senior Engineer) From: Xie, Cindy > Sent: 18 June 2019 15:30 To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2019.05 Release Queries Gupta, - We are still on track to release our release 2 (stx.2.0) on Aug'19 - StarlingX will be based on K8s from stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of footprint. 
- Right now, we do not support the version upgrade from 2018.10 release to new release. You need to re-deploy your cluster. Thx. - cindy From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 5:11 PM To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi All, It would be great if anyone can please address my below queries. I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the Hardware Requirement for both the releases or they'll remain unchanged? What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. 
DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logs.tar.gz Type: application/x-gzip Size: 4132300 bytes Desc: logs.tar.gz URL: From Anirudh.Gupta at hsc.com Thu Jun 20 13:25:15 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Thu, 20 Jun 2019 13:25:15 +0000 Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C153D4F7@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com>, <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0C153D4F7@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Cindy, Yong, Ghada & Team, I need to create AIO 2018.10 Simplex Bare Metal Setup. As discussed in the mail chain, I have arranged a server with DPDK NIC and tried installing the ISO downloaded from the link http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ The server configuration is as below: CPU core - 16 Hard disk - 2 Hardisk each of 930GB RAM - 64 GB DPDK supported NIC - Yes controller-0:~$ lspci | grep -i ethernet 03:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T 03:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T 06:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) 06:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) When I ran the config_controller command, it throws me an error in 02/08 Step : Applying Bootstrap Manifest" EXT4-fs error (device drb0): ext4_journal_check_start:56: Detected aborted journal EXT4-fs error(dbr0): Remounting filesystem read-only The completed screenshot of the error is attached in the mail. 
I also tried following the below mailing list http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/003140.html There are 2 Hard-disk attached in the server each of 930 GB I had tried configured my server in RAID 0 and RAID 1, but getting the same error of running the config_controller command in the step 2. I am also attaching the complete logs of path /var/log/ for your reference. Please help me in resolving the issue. Regards Anirudh Gupta (Senior Engineer) From: Khalil, Ghada Sent: 19 June 2019 06:07 To: Anirudh Gupta ; Zhao, Forrest ; Xie, Cindy ; Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Miller, Frank ; Arce Moreno, Abraham ; Hazzim Anaya Casas? ; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query I'm not aware of a way to allow the use of ovs (w/o dpdk) in 2018.10 To use OVS, you will need to use a recent load built from master and follow the updated deployment instructions: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ The last green sanity was using ISO 20190613. You can use the symlink: latest_green_build or monitor the sanity emails sent regularly to the mailing list. Regards, Ghada From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 2:14 PM To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Khalil, Ghada; Miller, Frank; Arce Moreno, Abraham; Hazzim Anaya Casas?; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: StarlingX 2018.10 without DPDK query Hi Forrest & Team I agree that there is no such instruction mentioned in the document for 2018.10 AIO Simplex Mode for OVS. It's just that my system doesn't have DPDK support and I need to deploy 2018.10 release until Release 2.0 is released in August 2019 as per the plan. So Can I setup STARLINGX 2018.10 AIO SIMPLEX in my bare metal without DPDK support? Is there any flag that could be disabled or any workaround that can work for me? Looking Forward for your response. Regards Anirudh Gupta From: Zhao, Forrest Sent: Tuesday, 18 June, 7:28 PM Subject: RE: StarlingX 2018.10 without DPDK query To: Xie, Cindy, Anirudh Gupta, Jones, Bruce E, Winnicki, Chris, Ildiko Vancsa, Ghada Khalil?, Jones, Bruce E, Frank Miller, Arce Moreno, Abraham, Hazzim Anaya Casas?, Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io, starlingx-announce at lists.starlingx.io If I remember correctly, 2018.10 release only support OVS-DPDK as virtual switch. Ghada may be able to double confirm that. Also I can't find any instruction in 2018.10 AIO simplex deployment guide https://docs.starlingx.io/deployment_guides/current/simplex.html to set the virtual switch to OVS. From: Xie, Cindy Sent: Tuesday, June 18, 2019 8:35 PM To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando >; Zhao, Forrest > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query + Forrest, do you have good answer to Gupta about using OVS without DPDK for 2018.10 release? I understand that we have the containerized OVS option without DPDK with vswitch type to "none" in stx.2.0 today. Not sure about the behavior in 2018.10 release. 
From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 7:22 PM To: Xie, Cindy >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2018.10 without DPDK query Note :- Subject Changed Hi Cindy, Thanks for the update. I want to setup StarlingX simplex 2018.10 setup, but I don't have DPDK support on my machine as a result of which the compute is in degraded state. I can see error in ovs-vswitch logs. Error Message: error: "Error attaching device '0000:03:00.0' to DPDK" Can you please suggest an alternative to this? I have tried setting the vswitch type to be "none" and "ovs", using the below command system modify -vswitch_type=ovs system modify -vswitch_type=none But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remains in dead-state. I need to manually start all the services, but even after starting the services, my compute remains in "degraded" state. Can't I create a StarlingX Simplex 2018.10 Setup without DPDK support? Regards Anirudh Gupta (Senior Engineer) From: Xie, Cindy > Sent: 18 June 2019 15:30 To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2019.05 Release Queries Gupta, - We are still on track to release our release 2 (stx.2.0) on Aug'19 - StarlingX will be based on K8s from stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of footprint. - Right now, we do not support the version upgrade from 2018.10 release to new release. You need to re-deploy your cluster. Thx. - cindy From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 5:11 PM To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi All, It would be great if anyone can please address my below queries. I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. 
https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the Hardware Requirement for both the releases or they'll remain unchanged? What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_20190620_182744.jpg Type: image/jpeg Size: 3415081 bytes Desc: IMG_20190620_182744.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: EXT4-fs-error.tar.gz Type: application/x-gzip Size: 495702 bytes Desc: EXT4-fs-error.tar.gz URL: From Anirudh.Gupta at hsc.com Thu Jun 20 13:29:39 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Thu, 20 Jun 2019 13:29:39 +0000 Subject: [Starlingx-discuss] Recall: StarlingX 2018.10 without DPDK query Message-ID: Anirudh Gupta would like to recall the message, " StarlingX 2018.10 without DPDK query". DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. From bruce.e.jones at intel.com Thu Jun 20 17:00:15 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 20 Jun 2019 17:00:15 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BD07729A2@fmsmsx123.amr.corp.intel.com> Ghada, Bill - taking this to email since there is no release meeting today. This isn't the Green Sanity we were hoping for, but is it green enough to declare the milestone? I would support declaring MS-3 based on these results. brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, June 19, 2019 8:33 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-19 (link) Status: YELLOW ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 13 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs The issue was that in duplex configuration Volumes could not been created. We'll investigate more on these problems in the next execution. Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
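[A note on the "Total allocated memory exceeds the total memory of controller-0 numa node 0" unlock rejection reported earlier in this digest: the per-NUMA-node reservations can be inspected and adjusted before retrying the unlock. This is a minimal sketch, assuming the stock sysinv CLI; the exact flags and the values shown are illustrative and may vary by release:

  source /etc/nova/openrc      # /etc/platform/openrc on newer loads
  # show platform, application and huge page memory per NUMA node
  system host-memory-list controller-0
  # if the reservations exceed what node 0 physically has,
  # scale the platform reservation down before unlocking, e.g.:
  system host-memory-modify -m 4000 controller-0 0
  system host-unlock controller-0]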
URL: 

From bruce.e.jones at intel.com Thu Jun 20 19:07:17 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Thu, 20 Jun 2019 19:07:17 +0000
Subject: [Starlingx-discuss] Performance Tests for STX 1.0
In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BD0772B89@fmsmsx123.amr.corp.intel.com>

I think this is a really good start. I'd like to see some metrics added to measure VM and container application recovery time, which may be outside the OPNFV framework.

brucej

-----Original Message-----
From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
Sent: Tuesday, June 18, 2019 6:54 AM
To: Ezpeer Chen ; Zhao, Forrest
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Performance Tests for STX 1.0

Hi Ezpeer

Thanks a lot for your mail. We are working on a full plan for performance testing in the incoming release.

The initial draft presentation is here:
https://drive.google.com/open?id=1Nr12zDRXf34kpjiA0LsFLU8GIpMY8Y-H2zmiC96CD4A

The base of the strategy will be based on OPNFV, as described in the presentation.

Thanks a lot

Victor Rodriguez

On Mon, Jun 17, 2019 at 10:05 PM Ezpeer Chen wrote:
>
> Dear all,
>
> Where could I find the test plan or reports about performance tests for STX 1.0?
>
> Thanks a lot.
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From erich.cordoba.malibran at intel.com Thu Jun 20 20:42:36 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Thu, 20 Jun 2019 20:42:36 +0000
Subject: [Starlingx-discuss] [build] mirror-check.sh to verify updates in upstream.
Message-ID: 

Hi,

Following up on today's build meeting, I just want to share this script[0]. What it does is go through the RPMs defined in the .lst files and then, using repoquery, verify whether there's a new version of a package available on the upstream (CentOS) servers.

As it might be of interest to the community to automate this script (on CENGN or another server), I would also like to share this GitLab CI job[1] as an example of how I have set this script up. Here's also how the output looks[2].

Currently, the script reports the packages detailed below. The upversioning is being tracked by this bug[3].

I hope this can be interesting to someone.
-Erich - [0] https://opendev.org/starlingx/tools/src/branch/master/centos-mirror-tools/mirror-check.sh - [1] https://gitlab.com/erichcm/stx-mirror-check - [2] https://gitlab.com/erichcm/stx-mirror-check/-/jobs/236723311 - [3] https://bugs.launchpad.net/starlingx/+bug/1817351 Package lighttpd-1.4.52-1.el7.src not found, available lighttpd-1.4.54-1.el7.src Package perl-generators-1.08-6.el7.noarch not found, available perl-generators-1.08-7.el7.noarch Package pyflakes-1.3.0-2.el7.noarch not found, available pyflakes-0.9.2-1.el7.noarch Package python2-certifi-2018.10.15-1.el7.noarch not found, available python2-certifi-2018.10.15-5.el7.noarch Package python2-ddt-1.1.3-1.el7.noarch not found, available python2-ddt-1.2.0-2.el7.noarch Package python2-iso8601-0.1.11-7.el7.noarch not found, available python2-iso8601-0.1.11-8.el7.noarch Package python2-jsonschema-2.5.1-3.el7.noarch not found, available python2-jsonschema-2.6.0-2.el7.noarch Package python2-mccabe-0.6.1-6.el7.noarch not found, available python2-mccabe-0.6.1-7.el7.noarch Package python2-mimeparse-1.6.0-4.el7.noarch not found, available python2-mimeparse-1.6.0-5.el7.noarch Package python2-olefile-0.46-1.el7.noarch not found, available python2-olefile-0.46-2.el7.noarch Package python2-pika-0.10.0-9.el7.noarch not found, available python2-pika-0.10.0-10.el7.noarch Package python2-PyMySQL-0.9.2-1.el7.noarch not found, available python2-PyMySQL-0.9.2-2.el7.noarch Package python2-pyngus-2.2.4-1.el7.noarch not found, available python2-pyngus-2.3.0-1.el7.noarch Package python2-rpm-macros-3-22.el7.noarch not found, available python2-rpm-macros-3-24.el7.noarch Package python2-sphinx_rtd_theme-0.2.4-2.el7.0.noarch not found, available python2-sphinx_rtd_theme-0.2.4-3.el7.noarch Package python2-whoosh-2.7.4-3.el7.noarch not found, available python2-whoosh-2.7.4-5.el7.noarch Package python-contextlib2-0.5.1-2.el7.noarch not found, available python-contextlib2-0.5.1-3.el7.noarch Package python-rpm-macros-3-22.el7.noarch not found, available python-rpm-macros-3-24.el7.noarch Package python-srpm-macros-3-22.el7.noarch not found, available python-srpm-macros-3-24.el7.noarch Package libcmocka-1.1.3-1.el7.x86_64 not found, available libcmocka-1.1.5-1.el7.x86_64 Package libcmocka-devel-1.1.3-1.el7.x86_64 not found, available libcmocka-devel-1.1.5-1.el7.x86_64 Package libzstd-1.3.8-1.el7.x86_64 not found, available libzstd-1.4.0-1.el7.x86_64 Package openjpeg2-2.3.0-6.el7.x86_64 not found, available openjpeg2-2.3.1-1.el7.x86_64 Package python2-qpid-proton-0.24.0-2.el7.x86_64 not found, available python2-qpid-proton-0.28.0-1.el7.x86_64 Package python2-simplejson-3.10.0-1.el7.x86_64 not found, available python2-simplejson-3.10.0-7.el7.x86_64 Package qpid-proton-c-0.24.0-2.el7.x86_64 not found, available qpid-proton-c-0.28.0-1.el7.x86_64 Package python2-pysocks-1.6.8-5.el7.noarch not found, available python2-pysocks-1.6.8-6.el7.noarch Package python2-scapy-2.4.0-2.el7.noarch not found, available python2-scapy-2.4.0-3.el7.noarch Package collectd-5.8.0-4.el7.x86_64 not found, available collectd-5.8.1-4.el7.x86_64 Package containernetworking-cni-0.5.1-1.el7.x86_64 not found, available Package cppcheck-1.84-1.el7.x86_64 not found, available cppcheck-1.87-1.el7.x86_64 Package ntfs-3g-2017.3.23-6.el7.x86_64 not found, available ntfs-3g-2017.3.23-11.el7.x86_64 Package ntfs-3g-devel-2017.3.23-6.el7.x86_64 not found, available ntfs-3g-devel-2017.3.23-11.el7.x86_64 Package ntfsprogs-2017.3.23-6.el7.x86_64 not found, available ntfsprogs-2017.3.23-11.el7.x86_64 Package 
python2-msgpack-0.5.6-4.el7.x86_64 not found, available python2-msgpack-0.6.1-2.el7.x86_64 From Ian.Jolliffe at windriver.com Thu Jun 20 21:53:14 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Thu, 20 Jun 2019 21:53:14 +0000 Subject: [Starlingx-discuss] [TSC] Minutes 6/13 and 6/20 Message-ID: <898B6FF9-112A-4F68-8B92-4322D7C487BE@windriver.com> Apologies for the delay on last week’s notes. Minutes 6/20/2019: STX workshop tomorrow - 90 people registered, 70 environments to try out Ildiko attending and Packet providing infra OpenInfra meet-up in Beijing Co-located with networking summit in Beijing Yong – Edge solution introduction - overview of some other Edge technologies R3 Planning Infrastructure and Cluster Monitoring, for this https://review.opendev.org/#/c/665208/ Ask for TSC to review and comment - we could start early as this is benign to R2 work Redfish Integration Spec in progress - reviews started to enable prototyping TSN support , https://review.opendev.org/655833 Now looks like more than a documentation effort, may need some additional patches. Bruce to connect up with Forest - please add a comment to the spec. Containerize CEPH, https://review.opendev.org/656371 This would leverage Rook? Are there some preliminary components that could be done - assuming we have the bandwidth and review capacity to start the work Due to short cycle completion in R3 is not viewed to be feasible at this time K8S device plugin integration Intel GPU some initial reviews underway Nvidia GPU Intel FPGA spec to come - likely too big for R3 QAT also proposed for R3 Sysvinit -> systemd conversion/cleanup (low) new contributor work - something to work along the way Action : Ask of release time - recommend spec freeze date Minutes 6/13/2019: R3 Planning discussion Close on table stake features for the next release, details https://etherpad.openstack.org/p/stx-r3-feature-candidates Distributed Cloud (deferred from R2) Backup and Restore (deferred from R2) Upversion Kubernetes and dependencies (docker, calico, helm etc) Openstack upversion to Train Above 4 items - are agreed to be in. Upgrade from R2->R3 May not be feasible given the short release cycle as per etherpad Start in R3 time frame - potentially deliver as a 3.1 in X a few months later Python 2->3 cutover What is left? Saul is working to identify remaining User Space packages - Saul to get info and report back Host and Openstack can be different as they are containerized Mandatory due to obsolescence of Python2 Add containerized Fault to the list - another deferral New features Infrastructure and Cluster Monitoring - spec is coming - next TSC call MultiOS - continue investigation with OpenSuse Ubuntu/Deb - also of interest in Banking - Shuquan K8S uprev in R2 1.15 - releasing next week - we should/would move to this release risk - low technically reward - less technical debt agreed to move forward Action : Brent will send a note to the ML to document exception to make it easy to find. Centos 8 - initial work is underway - more of a heads up for post R3 work Regards; Ian -------------- next part -------------- An HTML attachment was scrubbed... 
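[For reference, the comparison that mirror-check.sh automates can also be run by hand with repoquery (from yum-utils). A minimal sketch, assuming the CentOS repos are reachable; the package name and .lst file path below are illustrative examples, not verified paths:

  # latest version available upstream for a given package
  repoquery --queryformat '%{name}-%{version}-%{release}.%{arch}' lighttpd
  # compare against the version pinned in the mirror .lst file
  grep '^lighttpd' centos-mirror-tools/rpms_centos.lst

If the two differ, the .lst entry is a candidate for the upversioning tracked in the bug above.]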
URL: From bruce.e.jones at intel.com Thu Jun 20 21:57:03 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 20 Jun 2019 21:57:03 +0000 Subject: [Starlingx-discuss] New wiki home page - version 2.0 In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007A7CA86@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BD07624C2@fmsmsx123.amr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A7CA86@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BD0772F7D@fmsmsx123.amr.corp.intel.com> Thank you for the feedback. I agree with it. My wiki skills aren't strong enough to know how to control the white space. The draft new wiki page is basically just a big table. Perhaps someone stronger in the force ^H^H^H^H wiki knows how to make it more compact?... brucej -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Thursday, June 13, 2019 6:28 AM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: RE: New wiki home page - version 2.0 Hi Bruce, my 2 cents. - I like the way it's more centered around Community, and particularly newcomers - I think the page should cater to newcomers, but at the same time still be a useful dashboard for non-newcomers - removing most of the text that's on our current wiki is a good start for this, I think - visually, I like the way the OpenStack page has very little blank space in the 8 'cells' of the Contributor Resources section - I think it'd be good if we could mimic this more (there's a fair bit of whitespace on the draft page) - the " Select the way you want to contribute..." bar would be a good thing to take from the OpenStack page too - something like it, we don't have the underlying material for a few of those buttons (yet) Bill... From: Jones, Bruce E Sent: Friday, June 7, 2019 6:02 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] New wiki home page - version 2.0 I have completely changed my draft of a new wiki home page, to follow the example set by https://www.openstack.org/community.  Please take a look and let me know if you like it (or not).  You can find the new draft StarlingX wiki page at https://wiki.openstack.org/wiki/StarlingX/Draft_new_wiki_home_page Thank you!         brucej From Ghada.Khalil at windriver.com Thu Jun 20 23:01:53 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 20 Jun 2019 23:01:53 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 In-Reply-To: <9A85D2917C58154C960D95352B22818BD07729A2@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD07729A2@fmsmsx123.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C153DE6B@ALA-MBD.corp.ad.wrs.com> Hi Bruce, My understanding is that sanity would be red with this load if run on a virtual env (or if run with vswitch_type = none) due to: https://bugs.launchpad.net/starlingx/+bug/1833497 I don't see results from a virtual env sanity below. Maria, Can you comment on whether the virtual env sanity was attempted or not? Thanks, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, June 20, 2019 1:00 PM To: Khalil, Ghada; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Ghada, Bill - taking this to email since there is no release meeting today. This isn't the Green Sanity we were hoping for, but is it green enough to declare the milestone? I would support declaring MS-3 based on these results. 
brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, June 19, 2019 8:33 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-19 (link) Status: YELLOW ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 13 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs The issue was that in duplex configuration Volumes could not been created. We'll investigate more on these problems in the next execution. Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Thu Jun 20 23:18:18 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 20 Jun 2019 23:18:18 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C153DE6B@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BD07729A2@fmsmsx123.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0C153DE6B@ALA-MBD.corp.ad.wrs.com> Message-ID: Hello Ghada, The execution in virtual environment takes more time than baremetal. Yesterday's execution started late and the virtual environment wasn't attempted. For today's build we are working on gathering results and in a final debug, we are seeing possible issues launching VM that only applies in a virtual environment and creating the appropriate Launchpad. Regards Maria G. From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Thursday, June 20, 2019 6:02 PM To: Jones, Bruce E ; Zvonar, Bill ; Perez Ibarra, Maria G Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Hi Bruce, My understanding is that sanity would be red with this load if run on a virtual env (or if run with vswitch_type = none) due to: https://bugs.launchpad.net/starlingx/+bug/1833497 I don't see results from a virtual env sanity below. Maria, Can you comment on whether the virtual env sanity was attempted or not? Thanks, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, June 20, 2019 1:00 PM To: Khalil, Ghada; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Ghada, Bill - taking this to email since there is no release meeting today. This isn't the Green Sanity we were hoping for, but is it green enough to declare the milestone? I would support declaring MS-3 based on these results. 
brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, June 19, 2019 8:33 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-19 (link) Status: YELLOW ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 13 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs The issue was that in duplex configuration Volumes could not been created. We'll investigate more on these problems in the next execution. Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Jun 20 23:27:47 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 20 Jun 2019 23:27:47 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD07729A2@fmsmsx123.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0C153DE6B@ALA-MBD.corp.ad.wrs.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C153DEAB@ALA-MBD.corp.ad.wrs.com> Thanks Maria. I expect you will have issues as per the bug below which will impact nova/VM functionality, so you may need to wait for tonight's build for a virtual env sanity. Regards, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, June 20, 2019 7:18 PM To: Khalil, Ghada; Jones, Bruce E; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Hello Ghada, The execution in virtual environment takes more time than baremetal. Yesterday's execution started late and the virtual environment wasn't attempted. For today's build we are working on gathering results and in a final debug, we are seeing possible issues launching VM that only applies in a virtual environment and creating the appropriate Launchpad. Regards Maria G. From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Thursday, June 20, 2019 6:02 PM To: Jones, Bruce E >; Zvonar, Bill >; Perez Ibarra, Maria G > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Hi Bruce, My understanding is that sanity would be red with this load if run on a virtual env (or if run with vswitch_type = none) due to: https://bugs.launchpad.net/starlingx/+bug/1833497 I don't see results from a virtual env sanity below. Maria, Can you comment on whether the virtual env sanity was attempted or not? Thanks, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, June 20, 2019 1:00 PM To: Khalil, Ghada; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Ghada, Bill - taking this to email since there is no release meeting today. 
This isn't the Green Sanity we were hoping for, but is it green enough to declare the milestone? I would support declaring MS-3 based on these results. brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, June 19, 2019 8:33 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190619 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-19 (link) Status: YELLOW ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs | 13 TCs FAIL Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs The issue was that in duplex configuration Volumes could not been created. We'll investigate more on these problems in the next execution. Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Thu Jun 20 23:43:45 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 20 Jun 2019 23:43:45 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190620 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-20 (link) Status: YELLOW ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs | 35 TCs Fail Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 51 TCs | 33 TCs Fail Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs | 33 TCs Fail Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs | 33 TCs Fail Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs VM in ERROR state due to not valid host found https://bugs.launchpad.net/starlingx/+bug/1833632 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
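[On the "no valid host found" failures noted above (https://bugs.launchpad.net/starlingx/+bug/1833632), the fault recorded on the instance and the scheduler log usually identify which filter rejected the hosts. A minimal triage sketch, assuming OpenStack CLI credentials are loaded; pod names vary per deployment, so they are looked up first:

  # the fault field carries the scheduler error for an ERROR-state VM
  openstack server show <vm-name> -c status -c fault
  # find the nova-scheduler pod and search its log for the filter results
  kubectl -n openstack get pods | grep nova-scheduler
  kubectl -n openstack logs <nova-scheduler-pod> | grep -iE 'filter|no valid host']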
URL: 

From ezpeerchen at gmail.com Fri Jun 21 08:08:18 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Fri, 21 Jun 2019 16:08:18 +0800
Subject: [Starlingx-discuss] How to turn off fault management?
Message-ID: 

Dear all,

Environment: STX 1.0 (2018/10) all-in-one simplex

How could I turn off fault management, which causes my system (controller-0) to reboot or power off?

Best Regards

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Tao.Liu at windriver.com Fri Jun 21 13:33:03 2019
From: Tao.Liu at windriver.com (Liu, Tao)
Date: Fri, 21 Jun 2019 13:33:03 +0000
Subject: [Starlingx-discuss] How to turn off fault management?
In-Reply-To: References: Message-ID: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com>

Hi Ezpeer,

Fault management reports fault conditions and significant events in the system; it does not reboot or power off the controller. The maintenance system takes the proper actions to recover the system when necessary.

I suggest you view the active alarms and the event history to see what failures might have led to rebooting the controller for recovery (could it be a configuration failure?):
fm alarm-list
fm event-list

In addition, /var/log/mtcAgent.log provides more details on why the host was rebooted or powered off.

Regards,
Tao

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Friday, June 21, 2019 4:08 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] How to turn off fault management?

Dear all,

Environment: STX 1.0 (2018/10) all-in-one simplex

How could I turn off fault management, which causes my system (controller-0) to reboot or power off?

Best Regards

-------------- next part --------------
An HTML attachment was scrubbed...
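[To expand slightly on the commands above: a minimal triage sequence for unexpected reboots on a 2018.10 all-in-one system, assuming admin credentials in /etc/nova/openrc, might look like this:

  source /etc/nova/openrc
  fm alarm-list        # alarms active right now
  fm event-list        # historical alarms and events
  # maintenance decisions (reboot / power-off / recovery) are logged here
  grep -iE 'reboot|power' /var/log/mtcAgent.log | tail -n 50
  # service failovers that can trigger recovery actions
  tail -n 100 /var/log/sm.log]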
URL: From maria.g.perez.ibarra at intel.com Fri Jun 21 21:24:20 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 21 Jun 2019 21:24:20 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190621 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-21 (link) Status: Green ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 51 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Sat Jun 22 03:27:53 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Sat, 22 Jun 2019 03:27:53 +0000 Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. In-Reply-To: References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FEF7810@SHSMSX101.ccr.corp.intel.com> Hi Ezpeer, I checked the log, it is the same reason as previous. Have you tried to create VM without pci-sriov? Will the issue still occur? If not, then we could narrow down to focus on the pci-sriov part. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 6:04 PM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear Shuicheng, Update new information I reinstall STX 1.0 (2018/10). My installation Step: 1. Install STX 1.0 (2018/10) all-in-one simplex 2. Configure pci-sriov interface # source /etc/nova/openrc #neutron providernet-create providernet-a --type=flat #neutron providernet-create providernet-b --type=vlan #neutron providernet-range-create --name providernet-b-range1 --range 100-400 providernet-b #system host-if-modify -c pci-sriov controller-0 enp2s0f0 -p providernet-a -N 7 #system host-if-modify -c data controller-0 enp2s0f1 -p providernet-b .....configure storage #system host-unlock controller-0 System auto reboot 3. After reboot to prompt, about 10-20 minutes issue occurred. Log files: https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS Best Regards Ezpeer Chen > 於 2019年6月20日 週四 下午5:20寫道: Dear Shuicheng, In my environment, i need to test pci-sriov feature. My installation Step: 1. Install STX 1.0 (2018/10) all-in-one simplex 2. 
Configure pci-sriov interface 3. Create network and VM on one vf port 4. Issue occurred Best Regards Lin, Shuicheng > 於 2019年6月20日 週四 下午4:41寫道: Hi Ezpeer, For the sm.log (/var/log/sm.log), it seems service process fail to run, and lead to sm think system is in unhealthy state, and try to reboot system to recover. It should be reboot, but not sure why it is shutdown in your environment and also auth.log shows “systemd-logind[893]: info Power key pressed./info Powering Off...” In the sm.log, I also see the network adapter is up/down randomly. It may be the cause of the service process failure. I also find VF Ethernet is enabled in the system also. To isolate the issue cause, could you help try to disable the VF Ethernet in BIOS, and use PF Ethernet only? Thanks. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 2:27 PM To: Lin, Shuicheng > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear Shuicheng, Platform: STX 1.0 ( http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/) 1. bare-metal system , MY host pc Controller-0 shutdown without reboot 2. provisioned system , all-in-one simplex 3. collet cmd log file: https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr Thanks for your help. Best Regards Lin, Shuicheng > 於 2019年6月20日 週四 下午1:56寫道: Hi Ezpeer, Is it virtual machine or bare-metal system? Is it a provisioned system? And what is the system configuration, simplex/duplex or multi node? StarlingX itself will not do auto-shutdown, but it may auto-reboot if there is critical error. Most of StarlingX’s log is at /var/log folder. You could run “collect” cmd in your fail system after the issue occur, and upload the generated logfile to somewhere others could access. Then I will have a check with it. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 11:41 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear all, My STX 1.0 will be automatically shutdown and power off. Where could i check the logs about this issue? Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Sat Jun 22 12:57:02 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Sat, 22 Jun 2019 12:57:02 +0000 Subject: [Starlingx-discuss] kubernetes uprev Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC2559F61@ALA-MBD.corp.ad.wrs.com> Folks, A heads up that kubernetes will be uprev'ed from 1.13.5 to 1.15 in the coming weeks. This was discussed and agreed to at the TSC meeting on 6/13/19, see mins https://etherpad.openstack.org/p/stx-cores As we get closer to making this change we will communicate via the mailing list. Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Sat Jun 22 16:13:02 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Sat, 22 Jun 2019 16:13:02 +0000 Subject: [Starlingx-discuss] stx.2.0 milestone-3 declared Message-ID: <151EE31B9FCCA54397A757BC674650F0C153E61D@ALA-MBD.corp.ad.wrs.com> Hello all, This email announces that the stx.2.0 milestone-3 has been achieved. As discussed in the community meeting on 06/19, the community agreed to declare the milestone once we have a good sanity. 
The following load is green for both bare metal and virtual environment http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190621T013000Z/outputs/iso/ The following exceptions were reviewed and agreed to previously: - Containers code removal/cleanup (tasks in several stories) - Fcst: end of June - API authentication for K8s platform (requires Armada upversion) - Fcst: TBD - multiple cinder storage tiers - Fcst: Jun 21 - component rebases - Fcst: ongoing as needed until RC1 ** Note: Story board cleanup is still in progress. On behalf of the starlingx release team, I would like to take this opportunity to thank everyone who contributed to this milestone. Our next milestone in RC1 planned for August 5. Regards, Ghada PS: Here is a reminder of code merge guidelines until RC1. This was discussed and sent out previously. - Priority goes to stx.2.0 bug fixes and stx.2.0 MS-3 exceptions (above) - For code unrelated to stx.2.0 (code for deferred items to stx.3.0, new stx.3.0 features, enhancements), only passive/disabled code should be merged in master until the stx.2.0 RC1 branch is created (Aug 5). We will leave it to the judgment of the technical leads and core reviewers to determine if code is safe to merge. If you need an opinion, the release planning team is happy to help (myself, Bill and Bruce). From cindy.xie at intel.com Sun Jun 23 11:36:51 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Sun, 23 Jun 2019 11:36:51 +0000 Subject: [Starlingx-discuss] stx.2.0 milestone-3 declared In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C153E61D@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C153E61D@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FAF97D@SHSMSX104.ccr.corp.intel.com> Great to see the MS3 achieved! Cheers! - cindy -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Sunday, June 23, 2019 12:13 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx.2.0 milestone-3 declared Hello all, This email announces that the stx.2.0 milestone-3 has been achieved. As discussed in the community meeting on 06/19, the community agreed to declare the milestone once we have a good sanity. The following load is green for both bare metal and virtual environment http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190621T013000Z/outputs/iso/ The following exceptions were reviewed and agreed to previously: - Containers code removal/cleanup (tasks in several stories) - Fcst: end of June - API authentication for K8s platform (requires Armada upversion) - Fcst: TBD - multiple cinder storage tiers - Fcst: Jun 21 - component rebases - Fcst: ongoing as needed until RC1 ** Note: Story board cleanup is still in progress. On behalf of the starlingx release team, I would like to take this opportunity to thank everyone who contributed to this milestone. Our next milestone in RC1 planned for August 5. Regards, Ghada PS: Here is a reminder of code merge guidelines until RC1. This was discussed and sent out previously. - Priority goes to stx.2.0 bug fixes and stx.2.0 MS-3 exceptions (above) - For code unrelated to stx.2.0 (code for deferred items to stx.3.0, new stx.3.0 features, enhancements), only passive/disabled code should be merged in master until the stx.2.0 RC1 branch is created (Aug 5). We will leave it to the judgment of the technical leads and core reviewers to determine if code is safe to merge. 
If you need an opinion, the release planning team is happy to help (myself, Bill and Bruce). _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Anirudh.Gupta at hsc.com Fri Jun 21 01:00:13 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Fri, 21 Jun 2019 01:00:13 +0000 Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com>, <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0C153D4F7@ALA-MBD.corp.ad.wrs.com>, Message-ID: Hi Team, Can someone please provide any pointer to resolve the issue. Regards Anirudh Gupta ________________________________ From: Anirudh Gupta Sent: Thursday, June 20, 2019 6:55:15 PM To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Miller, Frank; Arce Moreno, Abraham; Hernandez Gonzalez, Fernando; yong.hu at intel.com; Khalil, Ghada Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query Hi Cindy, Yong, Ghada & Team, I need to create AIO 2018.10 Simplex Bare Metal Setup. As discussed in the mail chain, I have arranged a server with DPDK NIC and tried installing the ISO downloaded from the link http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ The server configuration is as below: CPU core – 16 Hard disk – 2 Hardisk each of 930GB RAM – 64 GB DPDK supported NIC – Yes controller-0:~$ lspci | grep -i ethernet 03:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T 03:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T 06:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) 06:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) When I ran the config_controller command, it throws me an error in 02/08 Step : Applying Bootstrap Manifest” EXT4-fs error (device drb0): ext4_journal_check_start:56: Detected aborted journal EXT4-fs error(dbr0): Remounting filesystem read-only The completed screenshot of the error is attached in the mail. I also tried following the below mailing list http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/003140.html There are 2 Hard-disk attached in the server each of 930 GB I had tried configured my server in RAID 0 and RAID 1, but getting the same error of running the config_controller command in the step 2. I am also attaching the complete logs of path /var/log/ for your reference. Please help me in resolving the issue. Regards Anirudh Gupta (Senior Engineer) From: Khalil, Ghada Sent: 19 June 2019 06:07 To: Anirudh Gupta ; Zhao, Forrest ; Xie, Cindy ; Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Miller, Frank ; Arce Moreno, Abraham ; Hazzim Anaya Casas? ; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query I’m not aware of a way to allow the use of ovs (w/o dpdk) in 2018.10 To use OVS, you will need to use a recent load built from master and follow the updated deployment instructions: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ The last green sanity was using ISO 20190613. 
You can use the symlink: latest_green_build or monitor the sanity emails sent regularly to the mailing list. Regards, Ghada From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 2:14 PM To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Khalil, Ghada; Miller, Frank; Arce Moreno, Abraham; Hazzim Anaya Casas?; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: StarlingX 2018.10 without DPDK query Hi Forrest & Team I agree that there is no such instruction mentioned in the document for 2018.10 AIO Simplex Mode for OVS. It's just that my system doesn't have DPDK support and I need to deploy 2018.10 release until Release 2.0 is released in August 2019 as per the plan. So Can I setup STARLINGX 2018.10 AIO SIMPLEX in my bare metal without DPDK support? Is there any flag that could be disabled or any workaround that can work for me? Looking Forward for your response. Regards Anirudh Gupta From: Zhao, Forrest Sent: Tuesday, 18 June, 7:28 PM Subject: RE: StarlingX 2018.10 without DPDK query To: Xie, Cindy, Anirudh Gupta, Jones, Bruce E, Winnicki, Chris, Ildiko Vancsa, Ghada Khalil?, Jones, Bruce E, Frank Miller, Arce Moreno, Abraham, Hazzim Anaya Casas?, Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io, starlingx-announce at lists.starlingx.io If I remember correctly, 2018.10 release only support OVS-DPDK as virtual switch. Ghada may be able to double confirm that. Also I can’t find any instruction in 2018.10 AIO simplex deployment guide https://docs.starlingx.io/deployment_guides/current/simplex.html to set the virtual switch to OVS. From: Xie, Cindy Sent: Tuesday, June 18, 2019 8:35 PM To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando >; Zhao, Forrest > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query + Forrest, do you have good answer to Gupta about using OVS without DPDK for 2018.10 release? I understand that we have the containerized OVS option without DPDK with vswitch type to “none” in stx.2.0 today. Not sure about the behavior in 2018.10 release. From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 7:22 PM To: Xie, Cindy >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2018.10 without DPDK query Note :- Subject Changed Hi Cindy, Thanks for the update. I want to setup StarlingX simplex 2018.10 setup, but I don't have DPDK support on my machine as a result of which the compute is in degraded state. I can see error in ovs-vswitch logs. Error Message: error: "Error attaching device '0000:03:00.0' to DPDK" Can you please suggest an alternative to this? I have tried setting the vswitch type to be “none” and “ovs”, using the below command system modify –vswitch_type=ovs system modify –vswitch_type=none But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remains in dead-state. 
I need to manually start all the services, but even after starting the services, my compute remains in “degraded” state. Can’t I create a StarlingX Simplex 2018.10 Setup without DPDK support? Regards Anirudh Gupta (Senior Engineer) From: Xie, Cindy > Sent: 18 June 2019 15:30 To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2019.05 Release Queries Gupta, - We are still on track to release our release 2 (stx.2.0) on Aug’19 - StarlingX will be based on K8s from stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of footprint. - Right now, we do not support the version upgrade from 2018.10 release to new release. You need to re-deploy your cluster. Thx. - cindy From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 5:11 PM To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi All, It would be great if anyone can please address my below queries. I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the Hardware Requirement for both the releases or they’ll remain unchanged? What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. 
DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezpeerchen at gmail.com Fri Jun 21 03:51:23 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Fri, 21 Jun 2019 11:51:23 +0800 Subject: [Starlingx-discuss] ovs-vswitchd high CPU rate 100% Message-ID: Dear all, Environment: STX 1.0 (2018/10) all-in-one simplex My ovs-vswitchd process is causing the high CPU usage. Any suggestions? or It is normal ? [image: image.png] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 104383 bytes Desc: not available URL: From Brent.Rowsell at windriver.com Fri Jun 21 15:24:57 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Fri, 21 Jun 2019 15:24:57 +0000 Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com>, <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0C153D4F7@ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC25580D1@ALA-MBD.corp.ad.wrs.com> You do not have enough memory to satisfy the default platform settings of: Numa0: 14.5G Numa1: 2G You can try and reduce the default settings. Brent From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Thursday, June 20, 2019 12:19 AM To: Khalil, Ghada ; Zhao, Forrest ; Xie, Cindy ; Jones, Bruce E ; Winnicki, Chris ; Ildiko Vancsa ; Miller, Frank ; Arce Moreno, Abraham ; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2018.10 without DPDK query Hi Team, I am trying to install StarlingX AIO Simplex 2018.10 on one of the HP Server but facing issue in Unlocking the Host [root at controller-0 ~(keystone_admin)]# system host-unlock controller-0 Rejected: Total allocated memory exceeds the total memory of controller-0 numa node 0 The below is the server's configuration: Memory - 16GB CPU cores - 24 Hard disk - 600 GB Numa nodes - 2 [root at controller-0 ~(keystone_admin)]# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 24 On-line CPU(s) list: 0-23 Thread(s) per core: 2 Core(s) per socket: 6 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 44 Model name: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz Stepping: 2 CPU MHz: 1600.000 CPU max MHz: 2666.0000 CPU min MHz: 1600.0000 BogoMIPS: 5333.56 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 12288K NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22 NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm epb tpr_shadow vnmi flexpriority ept vpid dtherm ida arat I am attaching the /var/log for your reference. Can you please help me in resolving the issue. Regards Anirudh GUPTA From: Khalil, Ghada > Sent: 19 June 2019 06:07 To: Anirudh Gupta >; Zhao, Forrest >; Xie, Cindy >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Miller, Frank >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query I'm not aware of a way to allow the use of ovs (w/o dpdk) in 2018.10 To use OVS, you will need to use a recent load built from master and follow the updated deployment instructions: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ The last green sanity was using ISO 20190613. You can use the symlink: latest_green_build or monitor the sanity emails sent regularly to the mailing list. 
Regards, Ghada From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 2:14 PM To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Khalil, Ghada; Miller, Frank; Arce Moreno, Abraham; Hazzim Anaya Casas?; Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: StarlingX 2018.10 without DPDK query Hi Forrest & Team I agree that there is no such instruction mentioned in the document for 2018.10 AIO Simplex Mode for OVS. It's just that my system doesn't have DPDK support and I need to deploy 2018.10 release until Release 2.0 is released in August 2019 as per the plan. So Can I setup STARLINGX 2018.10 AIO SIMPLEX in my bare metal without DPDK support? Is there any flag that could be disabled or any workaround that can work for me? Looking Forward for your response. Regards Anirudh Gupta From: Zhao, Forrest Sent: Tuesday, 18 June, 7:28 PM Subject: RE: StarlingX 2018.10 without DPDK query To: Xie, Cindy, Anirudh Gupta, Jones, Bruce E, Winnicki, Chris, Ildiko Vancsa, Ghada Khalil?, Jones, Bruce E, Frank Miller, Arce Moreno, Abraham, Hazzim Anaya Casas?, Hernandez Gonzalez, Fernando Cc: starlingx-discuss at lists.starlingx.io, starlingx-announce at lists.starlingx.io If I remember correctly, 2018.10 release only support OVS-DPDK as virtual switch. Ghada may be able to double confirm that. Also I can't find any instruction in 2018.10 AIO simplex deployment guide https://docs.starlingx.io/deployment_guides/current/simplex.html to set the virtual switch to OVS. From: Xie, Cindy Sent: Tuesday, June 18, 2019 8:35 PM To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando >; Zhao, Forrest > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2018.10 without DPDK query + Forrest, do you have good answer to Gupta about using OVS without DPDK for 2018.10 release? I understand that we have the containerized OVS option without DPDK with vswitch type to "none" in stx.2.0 today. Not sure about the behavior in 2018.10 release. From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 7:22 PM To: Xie, Cindy >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2018.10 without DPDK query Note :- Subject Changed Hi Cindy, Thanks for the update. I want to setup StarlingX simplex 2018.10 setup, but I don't have DPDK support on my machine as a result of which the compute is in degraded state. I can see error in ovs-vswitch logs. Error Message: error: "Error attaching device '0000:03:00.0' to DPDK" Can you please suggest an alternative to this? I have tried setting the vswitch type to be "none" and "ovs", using the below command system modify -vswitch_type=ovs system modify -vswitch_type=none But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remains in dead-state. I need to manually start all the services, but even after starting the services, my compute remains in "degraded" state. 
Can't I create a StarlingX Simplex 2018.10 Setup without DPDK support? Regards Anirudh Gupta (Senior Engineer) From: Xie, Cindy > Sent: 18 June 2019 15:30 To: Anirudh Gupta >; Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil? >; Jones, Bruce E >; Frank Miller >; Arce Moreno, Abraham >; Hazzim Anaya Casas? >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: StarlingX 2019.05 Release Queries Gupta, - We are still on track to release our release 2 (stx.2.0) on Aug'19 - StarlingX will be based on K8s from stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of footprint. - Right now, we do not support the version upgrade from 2018.10 release to new release. You need to re-deploy your cluster. Thx. - cindy From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, June 18, 2019 5:11 PM To: Jones, Bruce E >; Winnicki, Chris >; Ildiko Vancsa >; Ghada Khalil >; Jones, Bruce E >; Frank Miller >; Xie, Cindy >; Arce Moreno, Abraham >; Hazzim Anaya Casas >; Hernandez Gonzalez, Fernando > Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2019.05 Release Queries Hi All, It would be great if anyone can please address my below queries. I have prepared All in One Simplex/Duplex Setup on release 2018.10 and would like to further continue my work on the upcoming 2019.05 release. Going forward, I do have some queries: As per the recent update, the release 2019.05 is delayed and now is expected to be out in the month of August 2019. https://wiki.openstack.org/wiki/StarlingX/Release_Plan Is there any further update on the release date? As per the release notes of 2018.10, I am using Pre-built StarlingX Image https://docs.starlingx.io/releasenotes/index.html#release-notes http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Will the release 2019.05 be based on Kubernetes? If Yes, What will be the changes in the footprint? And will there be another ISO with Kubernetes support available for 2019.05 Release? As per the below links of 2018.10 and 2019.05, the Hardware requirements is same. https://docs.starlingx.io/deployment_guides/current/duplex.html https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html So will there be any change in the Hardware Requirement for both the releases or they'll remain unchanged? What efforts would be required if I need to upgrade my system from the current 2018.10 to the new 2019.05 release? Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. 
DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Sun Jun 23 16:32:53 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Sun, 23 Jun 2019 16:32:53 +0000 Subject: [Starlingx-discuss] ovs-vswitchd high CPU rate 100% References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC255AF9D@ALA-MBD.corp.ad.wrs.com> This is expected as this is a DPDK application. Brent From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 11:51 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] ovs-vswitchd high CPU rate 100% Dear all, Environment: STX 1.0 (2018/10) all-in-one simplex My ovs-vswitchd process is causing the high CPU usage. Any suggestions? or It is normal ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Sun Jun 23 17:09:08 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Sun, 23 Jun 2019 17:09:08 +0000 Subject: [Starlingx-discuss] ovs-vswitchd high CPU rate 100% References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC255B0CA@ALA-MBD.corp.ad.wrs.com> This is expected as this is a DPDK application. 
Brent From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 11:51 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] ovs-vswitchd high CPU rate 100% Dear all, Environment: STX 1.0 (2018/10) all-in-one simplex My ovs-vswitchd process is causing the high CPU usage. Any suggestions? or It is normal ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezpeerchen at gmail.com Mon Jun 24 02:16:23 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Mon, 24 Jun 2019 10:16:23 +0800 Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEF7810@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7810@SHSMSX101.ccr.corp.intel.com> Message-ID: Dear Shuicheng, Even i don't create VM , this issue still happened again. My steps didn't create VM. (Last email) Any stable STX 1.0 could download? or It is the latest version of STX 1.0. Thanks a lot. Lin, Shuicheng 於 2019年6月22日 週六 上午11:27寫道: > Hi Ezpeer, > > I checked the log, it is the same reason as previous. > > Have you tried to create VM without pci-sriov? Will the issue still occur? > > If not, then we could narrow down to focus on the pci-sriov part. > > > > Best Regards > > Shuicheng > > > > *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com] > *Sent:* Thursday, June 20, 2019 6:04 PM > *To:* Lin, Shuicheng > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] STX 1.0 automatically shutdown and > power off. > > > > > Dear Shuicheng, > > > Update new information > > > I reinstall STX 1.0 (2018/10). > > > > My installation Step: > > 1. Install STX 1.0 (2018/10) all-in-one simplex > > 2. Configure pci-sriov interface > > # source /etc/nova/openrc > #neutron providernet-create providernet-a --type=flat > #neutron providernet-create providernet-b --type=vlan > #neutron providernet-range-create --name providernet-b-range1 --range > 100-400 providernet-b > #system host-if-modify -c pci-sriov controller-0 enp2s0f0 -p > providernet-a -N 7 > #system host-if-modify -c data controller-0 enp2s0f1 -p providernet-b > > .....configure storage > > #system host-unlock controller-0 > > System auto reboot > > 3. After reboot to prompt, about 10-20 minutes issue occurred. > > > Log files: > https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS > > > > > > Best Regards > > > > Ezpeer Chen 於 2019年6月20日 週四 下午5:20寫道: > > Dear Shuicheng, > > > In my environment, i need to test pci-sriov feature. > > My installation Step: > > 1. Install STX 1.0 (2018/10) all-in-one simplex > > 2. Configure pci-sriov interface > > 3. Create network and VM on one vf port > > 4. Issue occurred > > > > > Best Regards > > > > > > Lin, Shuicheng 於 2019年6月20日 週四 下午4:41寫道: > > Hi Ezpeer, > > For the sm.log (/var/log/sm.log), it seems service process fail to run, > and lead to sm think system is in unhealthy state, and try to reboot system > to recover. > > It should be reboot, but not sure why it is shutdown in your environment > and also auth.log shows “systemd-logind[893]: info Power key > pressed./info Powering Off...” > > > > In the sm.log, I also see the network adapter is up/down randomly. It may > be the cause of the service process failure. > > I also find VF Ethernet is enabled in the system also. 
> > To isolate the issue cause, could you help try to disable the VF Ethernet > in BIOS, and use PF Ethernet only? > > Thanks. > > > > > > Best Regards > > Shuicheng > > > > *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com] > *Sent:* Thursday, June 20, 2019 2:27 PM > *To:* Lin, Shuicheng > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] STX 1.0 automatically shutdown and > power off. > > > > Dear Shuicheng, > > Platform: STX 1.0 ( > http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/ > ) > > > > 1. bare-metal system , MY host pc Controller-0 shutdown without reboot > > 2. provisioned system , all-in-one simplex > > 3. collet cmd log file: > > https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr > > > > Thanks for your help. > > > > Best Regards > > > > Lin, Shuicheng 於 2019年6月20日 週四 下午1:56寫道: > > Hi Ezpeer, > > Is it virtual machine or bare-metal system? > > Is it a provisioned system? And what is the system configuration, > simplex/duplex or multi node? > > StarlingX itself will not do auto-shutdown, but it may auto-reboot if > there is critical error. > > Most of StarlingX’s log is at /var/log folder. > > > > You could run “collect” cmd in your fail system after the issue occur, and > upload the generated logfile to somewhere others could access. > > Then I will have a check with it. > > > > Best Regards > > Shuicheng > > > > *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com] > *Sent:* Thursday, June 20, 2019 11:41 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] STX 1.0 automatically shutdown and power > off. > > > > Dear all, > > My STX 1.0 will be automatically shutdown and power off. > > Where could i check the logs about this issue? > > > Thanks a lot. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cheng1.li at intel.com Mon Jun 24 06:42:08 2019 From: cheng1.li at intel.com (Li, Cheng1) Date: Mon, 24 Jun 2019 06:42:08 +0000 Subject: [Starlingx-discuss] Docker image list Message-ID: Hello Starlingxer, As you know many docker images are pulled during starlingx deployment. It may be fast to pull all these images in America, but it's very slow in China. To speed up starlingx deployment, I have set up a private docker registry for which I sync images every night from upstream registries by cron job. I installed starlingx without using my private docker registry so that I can get the upstream docker image list. Every item in the list is synced every day. This works fine except that the docker image list changes sometimes. In this case, I would have to collect the image list by deploying without using private docker registry, which is very slow. So I wonder if it's possible to publish the docker image list file together with ISO and tarball on CENGN. I know we do sanity tests for each ISO, maybe we can run 'docker images' in sanity test to collect the docker image list? Thanks, Cheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.MacDonald at windriver.com Mon Jun 24 11:17:56 2019 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Mon, 24 Jun 2019 11:17:56 +0000 Subject: [Starlingx-discuss] How to turn off fault management? 
In-Reply-To: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> References: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> Message-ID: <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> Hi Ezpeer, In addition to Tao’s point … The only time maintenance will power off a host outside of explicit administrative action is if that host’s board management controller is provisioned, the critical action for a sensor group has been changed to power cycle AND a sensor in that sensor group reports a debounced critical severity. If you are experiencing heartbeat failures that you are trying to debug you can change the heartbeat failure action to ‘degrade’ or ‘alarm’ only to avoid the recovery reboot. Not recommended, but available for debug. Ø system service-parameter-modify platform maintenance heartbeat_failure_action=degrade Ø system service-parameter-apply platform Locking a host will prevent host watchdog reboot due to quorum process failure or watchdog pet failure/timeout If you are experiencing autonomous host power off then I would look at the BMC logs for critical or fatal event reports. Eric. From: Liu, Tao [mailto:Tao.Liu at windriver.com] Sent: Friday, June 21, 2019 9:33 AM To: Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Hi Ezpeer, The fault management reports fault conditions and significant events in the system and it does not reboot or power off the controller. The maintenance system takes proper actions to recover the system When necessary. I suggest you to view the active alarms and event history to see what failures might lead to reboot the controller for recovery. (could it be a configuration failure?). fm alarm-list fm event-list In addition, /var/log/mtcAgent.log provides more details on why the host is reboot or power-off. Regards, Tao From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Friday, June 21, 2019 4:08 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How to turn off fault management? Dear all, Environment: STX 1.0 (2018/10) all-in-one simplex How could i turn off fault management which cause my system(controller-0) reboot or power-off? Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jun 24 11:38:05 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 24 Jun 2019 11:38:05 +0000 Subject: [Starlingx-discuss] How to turn off fault management? In-Reply-To: <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> References: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> Hi Eric, Here is the collect log for the issue. Log files: https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS You could find the issue reproduce step in below mail: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/005033.html It seems it is pci-sriov related issue. From the sm.log, it seems Ethernet interface is not stable, and cause several services cannot run successfully, and lead to the shutdown. Maybe you could provide some workaround suggestion for him. Thanks. 
Best Regards Shuicheng From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Monday, June 24, 2019 7:18 PM To: Liu, Tao ; Ezpeer Chen ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Hi Ezpeer, In addition to Tao’s point … The only time maintenance will power off a host outside of explicit administrative action is if that host’s board management controller is provisioned, the critical action for a sensor group has been changed to power cycle AND a sensor in that sensor group reports a debounced critical severity. If you are experiencing heartbeat failures that you are trying to debug you can change the heartbeat failure action to ‘degrade’ or ‘alarm’ only to avoid the recovery reboot. Not recommended, but available for debug. Ø system service-parameter-modify platform maintenance heartbeat_failure_action=degrade Ø system service-parameter-apply platform Locking a host will prevent host watchdog reboot due to quorum process failure or watchdog pet failure/timeout If you are experiencing autonomous host power off then I would look at the BMC logs for critical or fatal event reports. Eric. From: Liu, Tao [mailto:Tao.Liu at windriver.com] Sent: Friday, June 21, 2019 9:33 AM To: Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Hi Ezpeer, The fault management reports fault conditions and significant events in the system and it does not reboot or power off the controller. The maintenance system takes proper actions to recover the system When necessary. I suggest you to view the active alarms and event history to see what failures might lead to reboot the controller for recovery. (could it be a configuration failure?). fm alarm-list fm event-list In addition, /var/log/mtcAgent.log provides more details on why the host is reboot or power-off. Regards, Tao From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Friday, June 21, 2019 4:08 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How to turn off fault management? Dear all, Environment: STX 1.0 (2018/10) all-in-one simplex How could i turn off fault management which cause my system(controller-0) reboot or power-off? Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Mon Jun 24 11:39:05 2019 From: serverascode at gmail.com (Curtis) Date: Mon, 24 Jun 2019 07:39:05 -0400 Subject: [Starlingx-discuss] Docker image list In-Reply-To: References: Message-ID: On Mon, Jun 24, 2019 at 2:45 AM Li, Cheng1 wrote: > Hello Starlingxer, > > > > As you know many docker images are pulled during starlingx deployment. It > may be fast to pull all these images in America, but it’s very slow in > China. > > To speed up starlingx deployment, I have set up a private docker registry > for which I sync images every night from upstream registries by cron job. > > I installed starlingx without using my private docker registry so that I > can get the upstream docker image list. Every item in the list is synced > every day. > > > > This works fine except that the docker image list changes sometimes. In > this case, I would have to collect the image list by deploying without > using private docker registry, which is very slow. > > So I wonder if it’s possible to publish the docker image list file > together with ISO and tarball on CENGN. 
I know we do sanity tests for each > ISO, maybe we can run ‘docker images’ in sanity test to collect the docker > image list? > > > I'd love to see the same thing. For the workshop we did at the last summit I would have to deploy, then note what images were deployed, then download them all. Not automatable. If we could publish a list of images per release that would help immensely. :) Thanks, Curtis > Thanks, > > Cheng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.MacDonald at windriver.com Mon Jun 24 11:40:15 2019 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Mon, 24 Jun 2019 11:40:15 +0000 Subject: [Starlingx-discuss] How to turn off fault management? In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> References: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> Message-ID: <210898B96CA058408C55992CCAD98676C101E07A@ALA-MBD.corp.ad.wrs.com> SM does not power off a host. When you say shutdown you mean power off correct ? From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Monday, June 24, 2019 7:38 AM To: MacDonald, Eric; Liu, Tao; Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] How to turn off fault management? Importance: High Hi Eric, Here is the collect log for the issue. Log files: https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS You could find the issue reproduce step in below mail: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/005033.html It seems it is pci-sriov related issue. From the sm.log, it seems Ethernet interface is not stable, and cause several services cannot run successfully, and lead to the shutdown. Maybe you could provide some workaround suggestion for him. Thanks. Best Regards Shuicheng From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Monday, June 24, 2019 7:18 PM To: Liu, Tao ; Ezpeer Chen ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Hi Ezpeer, In addition to Tao’s point … The only time maintenance will power off a host outside of explicit administrative action is if that host’s board management controller is provisioned, the critical action for a sensor group has been changed to power cycle AND a sensor in that sensor group reports a debounced critical severity. If you are experiencing heartbeat failures that you are trying to debug you can change the heartbeat failure action to ‘degrade’ or ‘alarm’ only to avoid the recovery reboot. Not recommended, but available for debug. > system service-parameter-modify platform maintenance heartbeat_failure_action=degrade > system service-parameter-apply platform Locking a host will prevent host watchdog reboot due to quorum process failure or watchdog pet failure/timeout If you are experiencing autonomous host power off then I would look at the BMC logs for critical or fatal event reports. Eric. 
From: Liu, Tao [mailto:Tao.Liu at windriver.com] Sent: Friday, June 21, 2019 9:33 AM To: Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Hi Ezpeer, The fault management reports fault conditions and significant events in the system and it does not reboot or power off the controller. The maintenance system takes proper actions to recover the system When necessary. I suggest you to view the active alarms and event history to see what failures might lead to reboot the controller for recovery. (could it be a configuration failure?). fm alarm-list fm event-list In addition, /var/log/mtcAgent.log provides more details on why the host is reboot or power-off. Regards, Tao From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Friday, June 21, 2019 4:08 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How to turn off fault management? Dear all, Environment: STX 1.0 (2018/10) all-in-one simplex How could i turn off fault management which cause my system(controller-0) reboot or power-off? Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jun 24 11:40:31 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 24 Jun 2019 11:40:31 +0000 Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. In-Reply-To: References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7810@SHSMSX101.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FEF7D6E@SHSMSX101.ccr.corp.intel.com> Hi Ezpeer, It seems there is only 1 ISO in cengn for STX 1.0 release. Here it is: http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/ You could check whether your ISO is the same as this or not by looking at the build date in “/etc/build.info”. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Monday, June 24, 2019 10:16 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear Shuicheng, Even i don't create VM , this issue still happened again. My steps didn't create VM. (Last email) Any stable STX 1.0 could download? or It is the latest version of STX 1.0. Thanks a lot. Lin, Shuicheng > 於 2019年6月22日 週六 上午11:27寫道: Hi Ezpeer, I checked the log, it is the same reason as previous. Have you tried to create VM without pci-sriov? Will the issue still occur? If not, then we could narrow down to focus on the pci-sriov part. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 6:04 PM To: Lin, Shuicheng > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear Shuicheng, Update new information I reinstall STX 1.0 (2018/10). My installation Step: 1. Install STX 1.0 (2018/10) all-in-one simplex 2. 
2. Configure the pci-sriov interface:

# source /etc/nova/openrc
# neutron providernet-create providernet-a --type=flat
# neutron providernet-create providernet-b --type=vlan
# neutron providernet-range-create --name providernet-b-range1 --range 100-400 providernet-b
# system host-if-modify -c pci-sriov controller-0 enp2s0f0 -p providernet-a -N 7
# system host-if-modify -c data controller-0 enp2s0f1 -p providernet-b
..... configure storage
# system host-unlock controller-0

The system then auto-reboots.

3. After rebooting to the prompt, the issue occurred within about 10-20 minutes.

Log files: https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS

Best Regards

Ezpeer Chen wrote on Thu, Jun 20, 2019 at 5:20 PM:

Dear Shuicheng,

In my environment, I need to test the pci-sriov feature.

My installation steps:
1. Install STX 1.0 (2018/10) all-in-one simplex
2. Configure the pci-sriov interface
3. Create a network and a VM on one VF port
4. Issue occurred

Best Regards

Lin, Shuicheng wrote on Thu, Jun 20, 2019 at 4:41 PM:

Hi Ezpeer,

From the sm.log (/var/log/sm.log), it seems service processes fail to run, which leads SM to consider the system unhealthy and try to reboot it to recover. That should be a reboot, so I am not sure why it is a shutdown in your environment; auth.log also shows "systemd-logind[893]: info Power key pressed./info Powering Off...".

In the sm.log I also see the network adapter going up and down randomly. That may be the cause of the service process failures. I also noticed that VF Ethernet is enabled in the system. To isolate the cause, could you try disabling VF Ethernet in the BIOS and using PF Ethernet only? Thanks.

Best Regards
Shuicheng

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Thursday, June 20, 2019 2:27 PM
To: Lin, Shuicheng
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.

Dear Shuicheng,

Platform: STX 1.0 (http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/)

1. Bare-metal system; my host PC controller-0 shuts down without rebooting.
2. Provisioned system, all-in-one simplex.
3. collect cmd log file: https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr

Thanks for your help.

Best Regards

Lin, Shuicheng wrote on Thu, Jun 20, 2019 at 1:56 PM:

Hi Ezpeer,

Is it a virtual machine or a bare-metal system? Is it a provisioned system? And what is the system configuration: simplex, duplex, or multi-node? StarlingX itself will not shut down automatically, but it may auto-reboot if there is a critical error. Most of StarlingX's logs are in the /var/log folder.

You could run the "collect" cmd on your failed system after the issue occurs, and upload the generated logfile somewhere others can access. Then I will have a look at it.

Best Regards
Shuicheng

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Thursday, June 20, 2019 11:41 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.

Dear all,

My STX 1.0 will automatically shut down and power off. Where can I check the logs for this issue?

Thanks a lot.
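For the "collect" step mentioned above, a minimal sketch (untested; the output location is an assumption based on typical StarlingX behaviour and may differ on 2018.10):

# On the affected node, as the administrative user:
collect              # bundles /var/log and system state into a tarball
ls -lh /scratch/     # assumption: collect drops its *.tar bundle under /scratch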
From shuicheng.lin at intel.com  Mon Jun 24 11:47:43 2019
From: shuicheng.lin at intel.com (Lin, Shuicheng)
Date: Mon, 24 Jun 2019 11:47:43 +0000
Subject: [Starlingx-discuss] How to turn off fault management?
In-Reply-To: <210898B96CA058408C55992CCAD98676C101E07A@ALA-MBD.corp.ad.wrs.com>
References: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E07A@ALA-MBD.corp.ad.wrs.com>
Message-ID: <9700A18779F35F49AF027300A49E7C765FEF7D94@SHSMSX101.ccr.corp.intel.com>

Hi Eric,
I guess you mean SM should reboot a host, not shut it down. Is that correct? But per Ezpeer's description, it seems like a shutdown.
BTW, it is with STX 1.0, from the 2018.10 release branch.

Best Regards
Shuicheng
From Eric.MacDonald at windriver.com  Mon Jun 24 12:15:37 2019
From: Eric.MacDonald at windriver.com (MacDonald, Eric)
Date: Mon, 24 Jun 2019 12:15:37 +0000
Subject: [Starlingx-discuss] How to turn off fault management?
In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEF7D94@SHSMSX101.ccr.corp.intel.com>
References: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E07A@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D94@SHSMSX101.ccr.corp.intel.com>
Message-ID: <210898B96CA058408C55992CCAD98676C101E0E1@ALA-MBD.corp.ad.wrs.com>

Yes, SM can request mtce to reboot a controller, and there will be an explicit mtce log for that (below):

controller-? is being force failed by SM
From Frank.Miller at windriver.com  Mon Jun 24 13:11:51 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Mon, 24 Jun 2019 13:11:51 +0000
Subject: [Starlingx-discuss] StarlingX Containerization Weekly Meeting
Message-ID: 

Team Agenda for June 24 meeting:

1. Higher priority LPs: 10 high priority bugs & 37 medium priority bugs
   Focus at the meeting to be on status for the high priority bugs
2. Remaining SB status:
   2004760 Containerize the ironic service [Mingyuan Qi]
   2003909 HELM Chart Override Generation [Gerry Kopec - 2 nova tasks + Daniel Badea 1 ceph tiering task]
   2002843 K8s Platform Support [Jerry Sun - 1 task for k8s API authentication; forecast: June 28]
   2004764 Removal of bare metal Openstack related code & 2005358 stx.config sysinv container cleanup [Al Bailey]
   2005860 Upversion container components (armada, docker, kubernetes) [Jerry/Alex/Al]
3. Other topics?
Etherpad: https://etherpad.openstack.org/p/stx-containerization
Timeslot: 11am EST / 8am PDT / 1600 UTC

Call details
* Zoom link: https://zoom.us/j/342730236
* Dialing in from phone:
  o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
  o Meeting ID: 342 730 236
  o International numbers available: https://zoom.us/u/ed95sU7aQ

Agenda and meeting minutes
Project notes are at https://etherpad.openstack.org/p/stx-containerization
Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers

From austin.sun at intel.com  Mon Jun 24 13:20:06 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Mon, 24 Jun 2019 13:20:06 +0000
Subject: [Starlingx-discuss] Docker image list
In-Reply-To: 
References: 
Message-ID: 

I echo this request. Publishing a list of images per release would really help a lot of developers, especially those working behind a proxy.

Thanks.
BR
Austin Sun.

From: Curtis [mailto:serverascode at gmail.com]
Sent: Monday, June 24, 2019 7:39 PM
To: Li, Cheng1
Cc: starlingx-discuss at lists.starlingx.io; Xu, Chenjie
Subject: Re: [Starlingx-discuss] Docker image list

On Mon, Jun 24, 2019 at 2:45 AM Li, Cheng1 wrote:

Hello Starlingxers,

As you know, many docker images are pulled during starlingx deployment. It may be fast to pull all these images in America, but it's very slow in China. To speed up starlingx deployment, I have set up a private docker registry, into which I sync images every night from the upstream registries via a cron job. I installed starlingx without using my private docker registry so that I could capture the upstream docker image list; every item in the list is synced every day. This works fine, except that the docker image list changes sometimes. In that case, I have to collect the image list again by deploying without the private docker registry, which is very slow.

So I wonder if it's possible to publish the docker image list file together with the ISO and tarball on CENGN. I know we do sanity tests for each ISO; maybe we can run 'docker images' in the sanity test to collect the docker image list?

I'd love to see the same thing. For the workshop we did at the last summit I would have to deploy, then note what images were deployed, then download them all. Not automatable. If we could publish a list of images per release that would help immensely. :)

Thanks,
Curtis

Thanks,
Cheng

From erich.cordoba.malibran at intel.com  Mon Jun 24 13:55:44 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Mon, 24 Jun 2019 13:55:44 +0000
Subject: [Starlingx-discuss] Docker image list
In-Reply-To: 
References: 
Message-ID: <4A1AD93A-CF1E-407C-8BB5-168B64EB7911@intel.com>

In Mexico we have the same issue. To work around it, I created this tool[0] to get the images from the chart tarball and thus be able to populate internal registries, although this doesn't cover the images required during the ansible playbook run. For example, this is the output of today's chart[1].
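The rough idea behind such a tool, pulling image references out of the helm chart tarballs, can be sketched as follows (a guess at the approach, not the actual stx-charts implementation; the chart path and registry names are assumptions):

# List the unique image references declared in a set of helm chart tarballs.
for chart in /path/to/charts/*.tgz; do                 # assumption: charts staged locally
    tar -xzOf "$chart" --wildcards '*/values.yaml' 2>/dev/null
done | grep -oE '(docker\.io|quay\.io|gcr\.io)/[^" ]+' | sort -u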
I hope this can help in the meantime, until an official list is provided.

-[0] https://gitlab.com/erichcm/stx-charts
-[1]
docker.io/starlingx/stx-fm-rest-api:master-centos-stable-latest
docker.io/openstackhelm/magnum:ocata
quay.io/attcomdev/ubuntu-source-gnocchi-metricd:3.0.3
gcr.io/google_containers/defaultbackend:1.0
quay.io/attcomdev/ubuntu-source-gnocchi-api:3.0.3
docker.io/openstackhelm/libvirt:ubuntu-xenial-1.3.1-1ubuntu10.24
docker.io/openstackhelm/mariadb:10.2.18
docker.io/prom/memcached-exporter:v0.4.1
docker.io/kolla/ubuntu-source-aodh-evaluator:ocata
docker.io/openstackhelm/heat:newton
quay.io/attcomdev/ubuntu-source-gnocchi-statsd:3.0.3
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
docker.io/starlingx/stx-mariadb:master-centos-stable-latest
docker.io/kolla/ubuntu-source-ceilometer-api:ocata
docker.io/starlingx/stx-nova-api-proxy:master-centos-stable-latest
docker.io/openstackhelm/horizon:ocata
docker.io/port/ceph-config-helper:v1.10.3
docker.io/openstackhelm/heat:ocata-ubuntu_xenial
docker.io/openstackhelm/openvswitch:v2.8.1
docker.io/osixia/keepalived:1.4.5
docker.io/mariadb:10.2.13
docker.io/kolla/ubuntu-source-nova-novncproxy:ocata
docker.io/postgres:9.5
docker.io/kolla/ubuntu-source-panko-api:ocata
quay.io/stackanetes/kubernetes-entrypoint:v0.3.1
docker.io/openstackhelm/ceph-daemon:latest
docker.io/kolla/ubuntu-source-ceilometer-central:ocata
docker.io/openstackhelm/ironic:ocata
docker.io/kolla/ubuntu-source-aodh-notifier:ocata
docker.io/memcached:1.5.5
docker.io/nginx:1.13.3
docker.io/mongo:3.4.9-jessie
docker.io/prom/mysqld-exporter:v0.10.0
docker.io/openstackhelm/neutron:ocata-sriov-1804
docker.io/docker:17.07.0
docker.io/kolla/ubuntu-source-ceilometer-compute:ocata
docker.io/openstackhelm/cinder:ocata
docker.io/xrally/xrally-openstack:1.3.0
docker.io/kolla/ubuntu-source-panko-base:ocata
docker.io/rabbitmq:3.7-management
docker.io/openstackhelm/barbican:ocata
docker.io/openstackhelm/heat:ocata
docker.io/starlingx/stx-heat:master-centos-stable-latest
docker.io/openstackhelm/glance:ocata
docker.io/kolla/ubuntu-source-ceilometer-notification:ocata
docker.io/openstackhelm/keystone:ocata
docker.io/rabbitmq:3.7.13
docker.io/kbudde/rabbitmq-exporter:v0.21.0
docker.io/openstackhelm/placement:ocata-ubuntu_xenial
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
docker.io/openstackhelm/neutron:ocata
docker.io/openstackhelm/heat:pike
docker.io/starlingx/stx-keystone-api-proxy:master-centos-stable-latest
docker.io/kolla/ubuntu-source-aodh-api:ocata
docker.io/kolla/ubuntu-source-aodh-listener:ocata
docker.io/openstackhelm/nova:ocata
docker.io/kolla/ubuntu-source-nova-spicehtml5proxy:ocata
quay.io/attcomdev/ubuntu-source-gnocchi-base:3.0.3
docker.io/kolla/ubuntu-source-nova-compute-ironic:ocata
docker.io/kolla/ubuntu-source-ceilometer-base:ocata
docker.io/kolla/ubuntu-source-ceilometer-collector:ocata
docker.io/kolla/ubuntu-source-aodh-base:ocata
docker.io/rabbitmq:3.7.13-management
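For the nightly-sync workflow described in this thread, a minimal sketch (the registry address and list file name are assumptions, not part of any published setup):

# Produce the list on a deployed system, per the suggestion above:
docker images --format '{{.Repository}}:{{.Tag}}' | sort -u > image-list.txt

# Mirror each image into a private registry:
REGISTRY=registry.local:5000          # assumption: your private registry
while read -r img; do
    docker pull "$img"
    docker tag "$img" "$REGISTRY/$img"
    docker push "$REGISTRY/$img"
done < image-list.txt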
From marcela.a.rosales.jimenez at intel.com  Mon Jun 24 14:32:25 2019
From: marcela.a.rosales.jimenez at intel.com (Rosales Jimenez, Marcela A)
Date: Mon, 24 Jun 2019 14:32:25 +0000
Subject: [Starlingx-discuss] [Multi-OS ] Minutes 6/24/19
Message-ID: <3A6B4605-986C-446A-B18B-A002EE0EA897@intel.com>

Hi team, here are my notes from today's multi-OS meeting.

Multi-OS team meeting - summary of the meeting, 6/24/19:
* Opens
  * Saul has pending reviews for openSUSE specfiles
* openSUSE flock services packaging update
  * Progress is 46 out of 59 planned packages for this first stage.
  * When installing the openSUSE packages, 10 packages could not install correctly because their runtime dependency package names differ from CentOS. 9 are fixed in the StarlingX OBS; 1 is still pending.
  * If a missing dependency is found, adding it to the CentOS specfile too is encouraged; however, building an image and doing an installation test is required before sending the review to the official repositories.

Thanks!
Marcela

From dtroyer at gmail.com  Mon Jun 24 14:50:09 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Mon, 24 Jun 2019 09:50:09 -0500
Subject: [Starlingx-discuss] Docker image list
In-Reply-To: <4A1AD93A-CF1E-407C-8BB5-168B64EB7911@intel.com>
References: <4A1AD93A-CF1E-407C-8BB5-168B64EB7911@intel.com>
Message-ID: 

On Mon, Jun 24, 2019 at 8:59 AM Cordoba Malibran, Erich wrote:
> docker.io/openstackhelm/magnum:ocata
> docker.io/kolla/ubuntu-source-aodh-evaluator:ocata
> docker.io/openstackhelm/heat:newton
> docker.io/kolla/ubuntu-source-ceilometer-api:ocata
> docker.io/openstackhelm/horizon:ocata
[...]

Why are these showing up with 'ocata' and 'newton' in the names/tags?
Presumably we are not using Ocata or Newton releases. Is this an artifact of how OpenStack-Helm works?

dt

--
Dean Troyer
dtroyer at gmail.com

From scott.little at windriver.com  Mon Jun 24 17:11:45 2019
From: scott.little at windriver.com (Scott Little)
Date: Mon, 24 Jun 2019 13:11:45 -0400
Subject: [Starlingx-discuss] [build] mirror-check.sh to verify updates in upstream.
In-Reply-To: 
References: 
Message-ID: <0988b29b-5bc7-bae0-2e04-1b2f83d440fe@windriver.com>

I've added a job to CENGN to run the mirror-check.sh script. The report is published here...

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/reports/mirror-check-failures.log

...unless folks can propose a better place. I haven't tried to trigger anything beyond that, e.g. perhaps an email to starlingx-discuss, but we can discuss at the next meeting.

Scott

On 2019-06-20 4:42 p.m., Cordoba Malibran, Erich wrote:
> Hi,
>
> According to our today's build meeting, I just want to share this script[0]. What it does is go through the rpms defined in the .lst files and then, using repoquery, verify whether there's a new version of a package available on the upstream (centos) servers.
>
> As it might be of interest to the community to automate this script (on cengn or another server), I would also like to share this Gitlab CI job[1] as an example of how I have set up this script. Here's also what the output looks like[2].
>
> Currently, the script reports the packages detailed below. The updating is being tracked by this bug[3].
>
> I hope this can be interesting to someone.
>
> -Erich
>
> - [0] https://opendev.org/starlingx/tools/src/branch/master/centos-mirror-tools/mirror-check.sh
> - [1] https://gitlab.com/erichcm/stx-mirror-check
> - [2] https://gitlab.com/erichcm/stx-mirror-check/-/jobs/236723311
> - [3] https://bugs.launchpad.net/starlingx/+bug/1817351
>
> Package lighttpd-1.4.52-1.el7.src not found, available lighttpd-1.4.54-1.el7.src
> Package perl-generators-1.08-6.el7.noarch not found, available perl-generators-1.08-7.el7.noarch
> Package pyflakes-1.3.0-2.el7.noarch not found, available pyflakes-0.9.2-1.el7.noarch
> Package python2-certifi-2018.10.15-1.el7.noarch not found, available python2-certifi-2018.10.15-5.el7.noarch
> Package python2-ddt-1.1.3-1.el7.noarch not found, available python2-ddt-1.2.0-2.el7.noarch
> Package python2-iso8601-0.1.11-7.el7.noarch not found, available python2-iso8601-0.1.11-8.el7.noarch
> Package python2-jsonschema-2.5.1-3.el7.noarch not found, available python2-jsonschema-2.6.0-2.el7.noarch
> Package python2-mccabe-0.6.1-6.el7.noarch not found, available python2-mccabe-0.6.1-7.el7.noarch
> Package python2-mimeparse-1.6.0-4.el7.noarch not found, available python2-mimeparse-1.6.0-5.el7.noarch
> Package python2-olefile-0.46-1.el7.noarch not found, available python2-olefile-0.46-2.el7.noarch
> Package python2-pika-0.10.0-9.el7.noarch not found, available python2-pika-0.10.0-10.el7.noarch
> Package python2-PyMySQL-0.9.2-1.el7.noarch not found, available python2-PyMySQL-0.9.2-2.el7.noarch
> Package python2-pyngus-2.2.4-1.el7.noarch not found, available python2-pyngus-2.3.0-1.el7.noarch
> Package python2-rpm-macros-3-22.el7.noarch not found, available python2-rpm-macros-3-24.el7.noarch
> Package python2-sphinx_rtd_theme-0.2.4-2.el7.0.noarch not found, available python2-sphinx_rtd_theme-0.2.4-3.el7.noarch
> Package python2-whoosh-2.7.4-3.el7.noarch not found, available python2-whoosh-2.7.4-5.el7.noarch
> Package python-contextlib2-0.5.1-2.el7.noarch not found, available python-contextlib2-0.5.1-3.el7.noarch
> Package python-rpm-macros-3-22.el7.noarch not found, available python-rpm-macros-3-24.el7.noarch
> Package python-srpm-macros-3-22.el7.noarch not found, available python-srpm-macros-3-24.el7.noarch
> Package libcmocka-1.1.3-1.el7.x86_64 not found, available libcmocka-1.1.5-1.el7.x86_64
> Package libcmocka-devel-1.1.3-1.el7.x86_64 not found, available libcmocka-devel-1.1.5-1.el7.x86_64
> Package libzstd-1.3.8-1.el7.x86_64 not found, available libzstd-1.4.0-1.el7.x86_64
> Package openjpeg2-2.3.0-6.el7.x86_64 not found, available openjpeg2-2.3.1-1.el7.x86_64
> Package python2-qpid-proton-0.24.0-2.el7.x86_64 not found, available python2-qpid-proton-0.28.0-1.el7.x86_64
> Package python2-simplejson-3.10.0-1.el7.x86_64 not found, available python2-simplejson-3.10.0-7.el7.x86_64
> Package qpid-proton-c-0.24.0-2.el7.x86_64 not found, available qpid-proton-c-0.28.0-1.el7.x86_64
> Package python2-pysocks-1.6.8-5.el7.noarch not found, available python2-pysocks-1.6.8-6.el7.noarch
> Package python2-scapy-2.4.0-2.el7.noarch not found, available python2-scapy-2.4.0-3.el7.noarch
> Package collectd-5.8.0-4.el7.x86_64 not found, available collectd-5.8.1-4.el7.x86_64
> Package containernetworking-cni-0.5.1-1.el7.x86_64 not found, available
> Package cppcheck-1.84-1.el7.x86_64 not found, available cppcheck-1.87-1.el7.x86_64
> Package ntfs-3g-2017.3.23-6.el7.x86_64 not found, available ntfs-3g-2017.3.23-11.el7.x86_64
> Package ntfs-3g-devel-2017.3.23-6.el7.x86_64 not found, available ntfs-3g-devel-2017.3.23-11.el7.x86_64
> Package ntfsprogs-2017.3.23-6.el7.x86_64 not found, available ntfsprogs-2017.3.23-11.el7.x86_64
> Package python2-msgpack-0.5.6-4.el7.x86_64 not found, available python2-msgpack-0.6.1-2.el7.x86_64
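A hedged sketch of the version check that mirror-check.sh performs (the exact flags and repo configuration in the real script may differ):

# Ask the configured upstream repos what they currently provide for one
# of the pinned packages, then compare with the name-version-release in
# the .lst files:
repoquery --queryformat '%{name}-%{version}-%{release}.%{arch}' lighttpd
# -> lighttpd-1.4.54-1.el7.x86_64  (newer than the pinned 1.4.52-1.el7)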
From Bill.Zvonar at windriver.com  Mon Jun 24 17:53:23 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Mon, 24 Jun 2019 17:53:23 +0000
Subject: [Starlingx-discuss] Meeting: DevStack today and going forward in StarlingX
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A80C13@ALA-MBD.corp.ad.wrs.com>

Hi all - read on if you'd like to learn more about DevStack.

One of the key outcomes of the Denver PTG was an agreement to build a culture of test that helps us maintain and increase the quality of our code base. An important part of this is the functional test layer, which includes DevStack.

Dean will provide an overview of DevStack for those that would like to know more about this tool - if you've got specific questions, please feel free to send them ahead of time.

The meeting will be tomorrow, June 25, at 1900 UTC (see [0] for the start time in various time zones). We will use the usual Zoom bridge [1].

Bill...

[0] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190625T1900
[1] https://zoom.us/j/342730236

From Bin.Qian at windriver.com  Mon Jun 24 18:23:51 2019
From: Bin.Qian at windriver.com (Qian, Bin)
Date: Mon, 24 Jun 2019 18:23:51 +0000
Subject: [Starlingx-discuss] How to turn off fault management?
In-Reply-To: <210898B96CA058408C55992CCAD98676C101E0E1@ALA-MBD.corp.ad.wrs.com>
References: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E07A@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D94@SHSMSX101.ccr.corp.intel.com>, <210898B96CA058408C55992CCAD98676C101E0E1@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

SM only requests mtce to reboot a failed controller during a failover, i.e., in a duplex environment, the surviving controller requests mtce to reboot the failed controller. SM won't request a reboot when there is only one controller available (e.g. an AIO Simplex).

Bin
From Anirudh.Gupta at hsc.com  Sun Jun 23 17:23:58 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Sun, 23 Jun 2019 17:23:58 +0000
Subject: [Starlingx-discuss] StarlingX 2018.10 without DPDK query
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EC255B040@ALA-MBD.corp.ad.wrs.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FA6CEF@SHSMSX104.ccr.corp.intel.com>, <6345119E91D5C843A93D64F498ACFA13745233C5@SHSMSX101.ccr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0C153D4F7@ALA-MBD.corp.ad.wrs.com>, , <2588653EBDFFA34B982FAF00F1B4844EC255B040@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

Hi Brent,

Thanks for looking into the issue. Yes, these logs are with RAID 0, but the behaviour was the same with RAID 1.

Thanks
Anirudh Gupta

From: Rowsell, Brent
Sent: Sunday, 23 June, 10:36 PM
To: Anirudh Gupta; Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Miller, Frank; Arce Moreno, Abraham; Hernandez Gonzalez, Fernando; yong.hu at intel.com; Khalil, Ghada
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: [Starlingx-discuss] StarlingX 2018.10 without DPDK query

In the kern logs I see I/O errors. This suggests either a h/w issue (disk or controller) or a device driver issue.

The logs provided are with RAID0?

Brent

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Thursday, June 20, 2019 9:00 PM
To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Miller, Frank; Arce Moreno, Abraham; Hernandez Gonzalez, Fernando; yong.hu at intel.com; Khalil, Ghada
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: Re: [Starlingx-discuss] StarlingX 2018.10 without DPDK query

Hi Team,

Can someone please provide any pointers to resolve the issue?
Regards
Anirudh Gupta

From: Anirudh Gupta
Sent: Thursday, June 20, 2019 6:55:15 PM
To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Miller, Frank; Arce Moreno, Abraham; Hernandez Gonzalez, Fernando; yong.hu at intel.com; Khalil, Ghada
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: StarlingX 2018.10 without DPDK query

Hi Cindy, Yong, Ghada & Team,

I need to create an AIO 2018.10 Simplex bare metal setup. As discussed in the mail chain, I have arranged a server with a DPDK NIC and tried installing the ISO downloaded from the link
http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/

The server configuration is as below:
CPU cores - 16
Hard disk - 2 hard disks, each of 930GB
RAM - 64 GB
DPDK supported NIC - Yes

controller-0:~$ lspci | grep -i ethernet
03:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T
03:00.1 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T
06:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

When I ran the config_controller command, it threw an error at step 02/08, "Applying Bootstrap Manifest":

EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal
EXT4-fs error (drbd0): Remounting filesystem read-only

The complete screenshot of the error is attached to the mail. I also tried following the below mailing list thread:
http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/003140.html

There are 2 hard disks attached to the server, each of 930 GB. I have tried configuring my server in RAID 0 and RAID 1, but get the same error when running the config_controller command at step 2. I am also attaching the complete logs under /var/log/ for your reference.

Please help me resolve the issue.

Regards
Anirudh Gupta (Senior Engineer)

From: Khalil, Ghada
Sent: 19 June 2019 06:07
To: Anirudh Gupta; Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Miller, Frank; Arce Moreno, Abraham; Hazzim Anaya Casas; Hernandez Gonzalez, Fernando
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: StarlingX 2018.10 without DPDK query

I'm not aware of a way to allow the use of ovs (w/o dpdk) in 2018.10.

To use OVS, you will need to use a recent load built from master and follow the updated deployment instructions:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/

The last green sanity was using ISO 20190613. You can use the symlink latest_green_build, or monitor the sanity emails sent regularly to the mailing list.

Regards,
Ghada

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Tuesday, June 18, 2019 2:14 PM
To: Zhao, Forrest; Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Khalil, Ghada; Miller, Frank; Arce Moreno, Abraham; Hazzim Anaya Casas; Hernandez Gonzalez, Fernando
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: Re: StarlingX 2018.10 without DPDK query

Hi Forrest & Team,

I agree that there is no such instruction mentioned in the document for 2018.10 AIO Simplex mode for OVS. It's just that my system doesn't have DPDK support, and I need to deploy the 2018.10 release until Release 2.0 comes out in August 2019 as per the plan.
So can I set up StarlingX 2018.10 AIO Simplex on my bare metal server without DPDK support? Is there any flag that could be disabled, or any workaround that would work for me?

Looking forward to your response.

Regards
Anirudh Gupta

From: Zhao, Forrest
Sent: Tuesday, 18 June, 7:28 PM
To: Xie, Cindy; Anirudh Gupta; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Ghada Khalil; Frank Miller; Arce Moreno, Abraham; Hazzim Anaya Casas; Hernandez Gonzalez, Fernando
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: StarlingX 2018.10 without DPDK query

If I remember correctly, the 2018.10 release only supports OVS-DPDK as the virtual switch. Ghada may be able to confirm that. Also, I can't find any instruction in the 2018.10 AIO simplex deployment guide https://docs.starlingx.io/deployment_guides/current/simplex.html to set the virtual switch to OVS.

From: Xie, Cindy
Sent: Tuesday, June 18, 2019 8:35 PM
To: Anirudh Gupta; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Ghada Khalil; Frank Miller; Arce Moreno, Abraham; Hazzim Anaya Casas; Hernandez Gonzalez, Fernando; Zhao, Forrest
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: StarlingX 2018.10 without DPDK query

+ Forrest, do you have a good answer for Gupta about using OVS without DPDK on the 2018.10 release? I understand that we have the containerized OVS option without DPDK, with the vswitch type set to "none", in stx.2.0 today. Not sure about the behavior in the 2018.10 release.

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Tuesday, June 18, 2019 7:22 PM
To: Xie, Cindy; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Ghada Khalil; Frank Miller; Arce Moreno, Abraham; Hazzim Anaya Casas; Hernandez Gonzalez, Fernando
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: StarlingX 2018.10 without DPDK query

Note: Subject changed.

Hi Cindy,

Thanks for the update. I want to set up a StarlingX simplex 2018.10 system, but I don't have DPDK support on my machine, as a result of which the compute is in a degraded state. I can see an error in the ovs-vswitchd logs:

Error Message: error: "Error attaching device '0000:03:00.0' to DPDK"

Can you please suggest an alternative? I have tried setting the vswitch type to "none" and "ovs", using the commands below:

system modify --vswitch_type=ovs
system modify --vswitch_type=none

But after unlocking the host each time, when the system boots up, all my services (openvswitch, neutron-openvswitch-agent and nova-compute) remain in a dead state. I need to start all the services manually, but even after starting the services, my compute remains in a "degraded" state.

Can't I create a StarlingX Simplex 2018.10 setup without DPDK support?

Regards
Anirudh Gupta (Senior Engineer)

From: Xie, Cindy
Sent: 18 June 2019 15:30
To: Anirudh Gupta; Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Ghada Khalil; Frank Miller; Arce Moreno, Abraham; Hazzim Anaya Casas; Hernandez Gonzalez, Fernando
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: StarlingX 2019.05 Release Queries

Gupta,
- We are still on track to release our release 2 (stx.2.0) in Aug'19.
- StarlingX will be based on K8s from the stx.2.0 release, and we will have another ISO released. If you want to try the ISO, you can get our daily build and get an idea of the footprint.
- Right now, we do not support a version upgrade from the 2018.10 release to the new release. You need to re-deploy your cluster.

Thx. - cindy

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Tuesday, June 18, 2019 5:11 PM
To: Jones, Bruce E; Winnicki, Chris; Ildiko Vancsa; Ghada Khalil; Frank Miller; Xie, Cindy; Arce Moreno, Abraham; Hazzim Anaya Casas; Hernandez Gonzalez, Fernando
Cc: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: StarlingX 2019.05 Release Queries

Hi All,

It would be great if someone could please address my queries below. I have prepared All-in-One Simplex/Duplex setups on release 2018.10 and would like to continue my work on the upcoming 2019.05 release. Going forward, I do have some queries:

1. As per the recent update, the release 2019.05 is delayed and is now expected to be out in August 2019 (https://wiki.openstack.org/wiki/StarlingX/Release_Plan). Is there any further update on the release date?

2. As per the release notes of 2018.10, I am using the pre-built StarlingX image:
https://docs.starlingx.io/releasenotes/index.html#release-notes
http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/
Will the 2019.05 release be based on Kubernetes? If yes, what will be the changes in the footprint, and will there be another ISO with Kubernetes support available for the 2019.05 release?

3. As per the below links for 2018.10 and 2019.05, the hardware requirements are the same:
https://docs.starlingx.io/deployment_guides/current/duplex.html
https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html
Will there be any change in the hardware requirements between the two releases, or will they remain unchanged?

4. What effort would be required to upgrade my system from the current 2018.10 to the new 2019.05 release?

Regards
Anirudh Gupta (Senior Engineer)

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
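For the dead vswitch services reported earlier in this thread, a minimal inspection sketch (the service unit names are taken from the report itself; the OVS log path is an assumption and may differ on 2018.10):

# Check the services the reporter listed as dead after host unlock:
sudo systemctl status openvswitch neutron-openvswitch-agent nova-compute

# Look for the DPDK attach error in the OVS logs:
sudo grep -i 'dpdk' /var/log/openvswitch/ovs-vswitchd.log | tail -n 20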
From Ronald.Stone at windriver.com  Mon Jun 24 11:45:50 2019
From: Ronald.Stone at windriver.com (Stone, Ronald)
Date: Mon, 24 Jun 2019 11:45:50 +0000
Subject: [Starlingx-discuss] [docs] Git repo not found
Message-ID: <90B8CFEDE03A6549A2DE0880F7B0DF610804AEF6@ALA-MBD.corp.ad.wrs.com>

I am trying to follow the instructions here:
https://docs.starlingx.io/deploy_install_guides/latest/aio_duplex/index.html#setting-up-the-workstation

but encounter an error with the git pull:

git clone https://git.starlingx.io/tools
Cloning into 'tools'...
fatal: repository 'https://opendev.org/tools/' not found

Any assistance appreciated.
From fungi at yuggoth.org  Mon Jun 24 19:01:43 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 24 Jun 2019 19:01:43 +0000
Subject: [Starlingx-discuss] [docs] Git repo not found
In-Reply-To: <90B8CFEDE03A6549A2DE0880F7B0DF610804AEF6@ALA-MBD.corp.ad.wrs.com>
References: <90B8CFEDE03A6549A2DE0880F7B0DF610804AEF6@ALA-MBD.corp.ad.wrs.com>
Message-ID: <20190624190143.sislt56rryoe73gm@yuggoth.org>

On 2019-06-24 11:45:50 +0000 (+0000), Stone, Ronald wrote:
> I am trying to follow the instructions here:
> https://docs.starlingx.io/deploy_install_guides/latest/aio_duplex/index.html#setting-up-the-workstation
>
> but encounter an error with the git pull:
>
> git clone https://git.starlingx.io/tools
> Cloning into 'tools'...
> fatal: repository 'https://opendev.org/tools/' not found
[...]

It looks like this document was added a month ago by https://review.opendev.org/659185 and the mistake went overlooked by reviewers. https://git.starlingx.io/tools was never a valid clone URL, it should be updated to say either https://git.starlingx.io/stx-tools or https://opendev.org/starlingx/tools instead. I've taken a stab at correcting it with https://review.opendev.org/667185 just now.

--
Jeremy Stanley

From bruce.e.jones at intel.com  Mon Jun 24 19:09:46 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Mon, 24 Jun 2019 19:09:46 +0000
Subject: [Starlingx-discuss] [docs] Git repo not found
In-Reply-To: <20190624190143.sislt56rryoe73gm@yuggoth.org>
References: <90B8CFEDE03A6549A2DE0880F7B0DF610804AEF6@ALA-MBD.corp.ad.wrs.com> <20190624190143.sislt56rryoe73gm@yuggoth.org>
Message-ID: <9A85D2917C58154C960D95352B22818BD07754DF@fmsmsx123.amr.corp.intel.com>

Thank you Jeremy! I just merged your fix.

brucej
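For anyone hitting the same error before the docs fix lands, the two valid clone URLs named in the reply above:

git clone https://git.starlingx.io/stx-tools
# or, equivalently:
git clone https://opendev.org/starlingx/tools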
From maria.g.perez.ibarra at intel.com  Tue Jun 25 01:18:31 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 25 Jun 2019 01:18:31 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190624
Message-ID: 

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-24 (link)

Status: GREEN

======================
Bare Metal environment
======================

AIO - Simplex:
Setup              03 TCs
Provision-Containers   01 TCs
Sanity-OpenStack   49 TCs
Sanity-Platform    11 TCs
------------------------------
TOTAL: 64 TCs

AIO - Duplex:
Setup              03 TCs
Provision-Containers   01 TCs
Sanity-OpenStack   52 TCs
Sanity-Platform    09 TCs
------------------------------
TOTAL: 65 TCs

Standard - Local Storage (2+2):
Setup              03 TCs
Provision-Containers   01 TCs
Sanity-OpenStack   52 TCs
Sanity-Platform    09 TCs
------------------------------
TOTAL: 65 TCs

Standard - External Storage (2+2+2):
Setup              03 TCs
Provision-Containers   01 TCs
Sanity-OpenStack   52 TCs
Sanity-Platform    05 TCs
------------------------------
TOTAL: 61 TCs

===================
Virtual Environment
===================

AIO - Simplex
Setup              03 TCs
Provisioning       01 TCs
Sanity OpenStack   49 TCs
Sanity Platform    07 TCs
------------------------------
TOTAL: 60 TCs

AIO - Duplex
Setup              03 TCs
Provisioning       01 TCs
Sanity OpenStack   51 TCs
Sanity Platform    05 TCs
------------------------------
TOTAL: 61 TCs

Standard - Local Storage (2+2):
Setup              03 TCs
Provisioning       01 TCs
Sanity OpenStack   52 TCs
Sanity Platform    05 TCs
------------------------------
TOTAL: 61 TCs

Regards
Maria G.

From Bill.Zvonar at windriver.com  Tue Jun 25 12:44:40 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Tue, 25 Jun 2019 12:44:40 +0000
Subject: [Starlingx-discuss] Community Call (June 26, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A81113@ALA-MBD.corp.ad.wrs.com>

Reminder of tomorrow's Community call, topics include...

- MS-3 officially declared
- policy for changes going forward
- bug count / resolution forecast
- updated wiki
- first contact

Please feel free to add topics to the agenda at [0].

Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190626T1400

From cindy.xie at intel.com  Tue Jun 25 13:49:54 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Tue, 25 Jun 2019 13:49:54 +0000
Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FB1D1C@SHSMSX104.ccr.corp.intel.com>

Agenda for 6/26 meeting:
- stx.3.0 feature proposal (All)
- Ceph test status report (Abraham/Fernando)
- QAT test status report (Ricardo)
- stx.2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin; Bin)
- Opens (all)

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, June 26, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
  o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
  o Zoom link: https://zoom.us/j/342730236
  o Dialing in from phone:
    o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
    o Meeting ID: 342 730 236
    o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
  o https://etherpad.openstack.org/p/stx-distro-other

From ezpeerchen at gmail.com  Tue Jun 25 02:39:07 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Tue, 25 Jun 2019 10:39:07 +0800
Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEF7D6E@SHSMSX101.ccr.corp.intel.com>
References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7810@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7D6E@SHSMSX101.ccr.corp.intel.com>
Message-ID: 

Dear Shuicheng,

How can I find out what caused my system to shut down?

=================================================
controller-0:~$ cat /etc/build.info
SW_VERSION="18.10"
BUILD_TARGET="Unknown"
BUILD_TYPE="Informal"
BUILD_ID="n/a"
JOB="n/a"
BUILD_BY="builder"
BUILD_NUMBER="n/a"
BUILD_HOST="258041cdd9ff"
BUILD_DATE="2018-11-10 23:06:44 +0000"
BUILD_DIR="/"
WRS_SRC_DIR="/localdisk/designer/builder/2018.10_src/cgcs-root"
WRS_GIT_BRANCH="HEAD"
CGCS_SRC_DIR="/localdisk/designer/builder/2018.10_src/cgcs-root/stx"
CGCS_GIT_BRANCH="HEAD"
controller-0:~$
=================================================

[image: image.png]

Lin, Shuicheng wrote on Mon, Jun 24, 2019 at 7:40 PM:

> Hi Ezpeer,
> It seems there is only one ISO on CENGN for the STX 1.0 release.
> > Here it is: > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/ > > > > You could check whether your ISO is the same as this or not by looking at > the build date in “/etc/build.info”. > > > > Best Regards > > Shuicheng > > > > *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com] > *Sent:* Monday, June 24, 2019 10:16 AM > *To:* Lin, Shuicheng > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] STX 1.0 automatically shutdown and > power off. > > > > > > Dear Shuicheng, > > > Even i don't create VM , this issue still happened again. > > > > My steps didn't create VM. (Last email) > > > > Any stable STX 1.0 could download? or It is the latest version of STX 1.0. > > > > > > > Thanks a lot. > > > > > > Lin, Shuicheng 於 2019年6月22日 週六 上午11:27寫道: > > Hi Ezpeer, > > I checked the log, it is the same reason as previous. > > Have you tried to create VM without pci-sriov? Will the issue still occur? > > If not, then we could narrow down to focus on the pci-sriov part. > > > > Best Regards > > Shuicheng > > > > *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com] > *Sent:* Thursday, June 20, 2019 6:04 PM > *To:* Lin, Shuicheng > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] STX 1.0 automatically shutdown and > power off. > > > > > Dear Shuicheng, > > > Update new information > > > I reinstall STX 1.0 (2018/10). > > > > My installation Step: > > 1. Install STX 1.0 (2018/10) all-in-one simplex > > 2. Configure pci-sriov interface > > # source /etc/nova/openrc > #neutron providernet-create providernet-a --type=flat > #neutron providernet-create providernet-b --type=vlan > #neutron providernet-range-create --name providernet-b-range1 --range > 100-400 providernet-b > #system host-if-modify -c pci-sriov controller-0 enp2s0f0 -p > providernet-a -N 7 > #system host-if-modify -c data controller-0 enp2s0f1 -p providernet-b > > .....configure storage > > #system host-unlock controller-0 > > System auto reboot > > 3. After reboot to prompt, about 10-20 minutes issue occurred. > > > Log files: > https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS > > > > > > Best Regards > > > > Ezpeer Chen 於 2019年6月20日 週四 下午5:20寫道: > > Dear Shuicheng, > > > In my environment, i need to test pci-sriov feature. > > My installation Step: > > 1. Install STX 1.0 (2018/10) all-in-one simplex > > 2. Configure pci-sriov interface > > 3. Create network and VM on one vf port > > 4. Issue occurred > > > > > Best Regards > > > > > > Lin, Shuicheng 於 2019年6月20日 週四 下午4:41寫道: > > Hi Ezpeer, > > For the sm.log (/var/log/sm.log), it seems service process fail to run, > and lead to sm think system is in unhealthy state, and try to reboot system > to recover. > > It should be reboot, but not sure why it is shutdown in your environment > and also auth.log shows “systemd-logind[893]: info Power key > pressed./info Powering Off...” > > > > In the sm.log, I also see the network adapter is up/down randomly. It may > be the cause of the service process failure. > > I also find VF Ethernet is enabled in the system also. > > To isolate the issue cause, could you help try to disable the VF Ethernet > in BIOS, and use PF Ethernet only? > > Thanks. 
> Best Regards
> Shuicheng
>
> From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
> Sent: Thursday, June 20, 2019 2:27 PM
> To: Lin, Shuicheng
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
>
> Dear Shuicheng,
>
> Platform: STX 1.0 (http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/)
>
> 1. bare-metal system; my host PC controller-0 shut down without a reboot
> 2. provisioned system, all-in-one simplex
> 3. collect cmd log file: https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr
>
> Thanks for your help.
>
> Best Regards
>
> Lin, Shuicheng wrote on Thursday, June 20, 2019 at 1:56 PM:
> Hi Ezpeer,
> Is it a virtual machine or a bare-metal system?
> Is it a provisioned system? And what is the system configuration, simplex/duplex or multi-node?
> StarlingX itself will not do an auto-shutdown, but it may auto-reboot if there is a critical error.
> Most of StarlingX's logs are in the /var/log folder.
>
> You could run the "collect" cmd on your failed system after the issue occurs, and upload the generated logfile to somewhere others can access.
> Then I will have a look at it.
>
> Best Regards
> Shuicheng
>
> From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
> Sent: Thursday, June 20, 2019 11:41 AM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
>
> Dear all,
>
> My STX 1.0 will automatically shut down and power off.
>
> Where could I check the logs about this issue?
>
> Thanks a lot.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 88001 bytes
Desc: not available
URL: 

From shaxiaoz_7443 at qq.com  Tue Jun 25 03:13:51 2019
From: shaxiaoz_7443 at qq.com (504626684)
Date: Tue, 25 Jun 2019 11:13:51 +0800
Subject: [Starlingx-discuss] Asking for advice on using StarlingX
Message-ID:

Hello,

My name is Liu Zheng, and I am a programmer. I have recently been learning about edge computing and StarlingX. Through the documentation at https://www.starlingx.io/ I have gained a rough understanding of StarlingX's architecture and deployment, but I still have not figured out how StarlingX is actually meant to be used, so what material should I read next? Could you share any other references or demos?

Many thanks, and best wishes.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shuicheng.lin at intel.com  Tue Jun 25 05:23:11 2019
From: shuicheng.lin at intel.com (Lin, Shuicheng)
Date: Tue, 25 Jun 2019 05:23:11 +0000
Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
In-Reply-To:
References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7810@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7D6E@SHSMSX101.ccr.corp.intel.com>
Message-ID: <9700A18779F35F49AF027300A49E7C76608AD688@SHSMSX105.ccr.corp.intel.com>

Hi Ezpeer,
I assume you can create a VM without issue if you do not configure pci-sriov.
But for the pci-sriov issue, I don't have much idea. Maybe you could have a try with other HW, or have a try with the latest STX ISO.

Best Regards
Shuicheng

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Tuesday, June 25, 2019 10:39 AM
To: Lin, Shuicheng
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
Dear Shuicheng,

How can I find out what caused my system to shut down?

=================================================
controller-0:~$ cat /etc/build.info
SW_VERSION="18.10"
BUILD_TARGET="Unknown"
BUILD_TYPE="Informal"
BUILD_ID="n/a"
JOB="n/a"
BUILD_BY="builder"
BUILD_NUMBER="n/a"
BUILD_HOST="258041cdd9ff"
BUILD_DATE="2018-11-10 23:06:44 +0000"
BUILD_DIR="/"
WRS_SRC_DIR="/localdisk/designer/builder/2018.10_src/cgcs-root"
WRS_GIT_BRANCH="HEAD"
CGCS_SRC_DIR="/localdisk/designer/builder/2018.10_src/cgcs-root/stx"
CGCS_GIT_BRANCH="HEAD"
controller-0:~$
=================================================

[image: image.png]

Lin, Shuicheng wrote on Monday, June 24, 2019 at 7:40 PM:

Hi Ezpeer,
It seems there is only 1 ISO on CENGN for the STX 1.0 release.
Here it is:
http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/
You could check whether your ISO is the same as this or not by looking at the build date in "/etc/build.info".

Best Regards
Shuicheng

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Monday, June 24, 2019 10:16 AM
To: Lin, Shuicheng
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.

Dear Shuicheng,

Even when I don't create a VM, this issue still happens.
My steps didn't create a VM. (Last email)
Is there a stable STX 1.0 to download? Or is this the latest version of STX 1.0?

Thanks a lot.

Lin, Shuicheng wrote on Saturday, June 22, 2019 at 11:27 AM:

Hi Ezpeer,
I checked the log; it is the same reason as before.
Have you tried to create a VM without pci-sriov? Will the issue still occur?
If not, then we can narrow the focus down to the pci-sriov part.

Best Regards
Shuicheng

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Thursday, June 20, 2019 6:04 PM
To: Lin, Shuicheng
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.

Dear Shuicheng,

Update with new information:

I reinstalled STX 1.0 (2018/10).

My installation steps:
1. Install STX 1.0 (2018/10) all-in-one simplex
2. Configure the pci-sriov interface
# source /etc/nova/openrc
# neutron providernet-create providernet-a --type=flat
# neutron providernet-create providernet-b --type=vlan
# neutron providernet-range-create --name providernet-b-range1 --range 100-400 providernet-b
# system host-if-modify -c pci-sriov controller-0 enp2s0f0 -p providernet-a -N 7
# system host-if-modify -c data controller-0 enp2s0f1 -p providernet-b
.....configure storage
# system host-unlock controller-0
System auto reboot
3. After reboot to the prompt, the issue occurred within about 10-20 minutes.

Log files: https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS

Best Regards

Ezpeer Chen wrote on Thursday, June 20, 2019 at 5:20 PM:

Dear Shuicheng,

In my environment, I need to test the pci-sriov feature.
My installation steps:
1. Install STX 1.0 (2018/10) all-in-one simplex
2. Configure the pci-sriov interface
3. Create a network and a VM on one VF port
4. Issue occurred

Best Regards

Lin, Shuicheng wrote on Thursday, June 20, 2019 at 4:41 PM:

Hi Ezpeer,
For the sm.log (/var/log/sm.log), it seems a service process fails to run, which leads sm to think the system is in an unhealthy state and to try to reboot the system to recover.
It should be a reboot, but I am not sure why it is a shutdown in your environment, and also auth.log shows "systemd-logind[893]: info Power key pressed./info Powering Off..."

In the sm.log, I also see the network adapter going up/down randomly. It may be the cause of the service process failure.
I also find that VF Ethernet is enabled in the system.
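As a quick check - a rough sketch, using the log file names already mentioned above (paths may differ per release) - you can grep the last boot's logs for whichever component initiated the power-off:

# search platform and auth logs for power/shutdown events around the failure time
sudo grep -iE 'power|shutdown|force failed' /var/log/auth.log /var/log/sm.log /var/log/mtcAgent.log | tail -n 20

# util-linux view of recent shutdown/reboot records from wtmp
last -x shutdown reboot | head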
To isolate the issue cause, could you help try to disable the VF Ethernet in BIOS, and use PF Ethernet only? Thanks. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 2:27 PM To: Lin, Shuicheng > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear Shuicheng, Platform: STX 1.0 ( http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/latest_build/outputs/iso/) 1. bare-metal system , MY host pc Controller-0 shutdown without reboot 2. provisioned system , all-in-one simplex 3. collet cmd log file: https://drive.google.com/open?id=1D7yw_IiCPrDBCn6GmPK1GDC7FAdo5Wcr Thanks for your help. Best Regards Lin, Shuicheng > 於 2019年6月20日 週四 下午1:56寫道: Hi Ezpeer, Is it virtual machine or bare-metal system? Is it a provisioned system? And what is the system configuration, simplex/duplex or multi node? StarlingX itself will not do auto-shutdown, but it may auto-reboot if there is critical error. Most of StarlingX’s log is at /var/log folder. You could run “collect” cmd in your fail system after the issue occur, and upload the generated logfile to somewhere others could access. Then I will have a check with it. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Thursday, June 20, 2019 11:41 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off. Dear all, My STX 1.0 will be automatically shutdown and power off. Where could i check the logs about this issue? Thanks a lot. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 258153 bytes Desc: image003.png URL: From bruce.e.jones at intel.com Tue Jun 25 15:13:39 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 25 Jun 2019 15:13:39 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting June 25 2019 Message-ID: <9A85D2917C58154C960D95352B22818BD0776393@fmsmsx123.amr.corp.intel.com> * Bruce to disappear for two months starting July 4th. Dean to run this meeting in his absence starting July 9th * Nova placement changes: * Merged in StarlingX. Status in openstack helm??? (https://review.opendev.org/#/c/662229/ please review these changes. ? We are running the separate placement now ? Please attend the helm weekly meeting to advocate for the pending patch. * Final helm override status? (Gerry) - one review out, two more to go - forecast for EO next week. * Nova branch rebase (Dean) * Gerry to take a look at the pending PR for review. Dean didn't build a container image or do any system level testing. * I have done a preliminary rebase of the stx-nova Stein branch into stx/stein.2 [0]. It is passing unit and functional tests but since some changes were required from the current upstream reviews it really needs to be checked out further. [0] https://github.com/starlingx-staging/stx-nova/pull/25 * I started with upstream stable/stein acd2daa9 (current as of yesterday noon-ish) * Artom rebased the upstream NUMA patches in master so I pulled the current patchset of those * There is a missing import in 635229 that is causing the test failures, I inserted the commit adding that inline in the PR * There were conflicts in 634605 and 634606 due to ongoing development in master since stable/stein was branched. 
I made the obvious corrections; there may be more required that someone who is not familiar with this code (me) would likely miss.
* The final pep, unit and functional jobs are running under https://review.opendev.org/#/c/656065/8 and I expect them to pass.
* I do believe this requires the extracted placement to be merged, so we may not be able to test it in StarlingX until that is complete. I am hoping this can be tested by just replacing the Nova docker image, but I do not have the time to run through that.
* Orphan instance cleanup patches - reviewed by Sean Mooney but not getting reviewer attention now.
* NUMA API patches - Alex has given +2 but also waiting for core reviewers
* Bruce to ping Eric on both of these
* Bugs
* Bruce to ping Ricardo on https://bugs.launchpad.net/starlingx/+bug/1827692
* Bruce to ping Gerry on https://bugs.launchpad.net/starlingx/+bug/1829062 - can this be closed? And this one: https://bugs.launchpad.net/starlingx/+bug/1824167?
* Bruce to ping Boxiang on https://bugs.launchpad.net/starlingx/+bug/1820882 and have it retested
* Frank to ping AL on https://bugs.launchpad.net/starlingx/+bug/1817528
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From xiongzhiwei at baicells.com  Tue Jun 25 15:58:43 2019
From: xiongzhiwei at baicells.com (xiongzhiwei)
Date: Tue, 25 Jun 2019 23:58:43 +0800
Subject: [Starlingx-discuss] Re: Asking for advice on using StarlingX
In-Reply-To:
References:
Message-ID: <379baff5-a723-46c4-9932-9037b49f5390.xiongzhiwei at baicells.com>

I looked into it for a while back in February and March, but have not followed it for the past three months. You could ask the Intel folks; they are the main contributors, and the core team is in Shanghai. You can also raise any questions on the mailing list at any time.

Sent from my DingTalk business mailbox
------------------------------------------------------------------
From: 504626684
Date: June 25, 2019 11:13:51
To: starlingx-discuss
Subject: [Starlingx-discuss] Asking for advice on using StarlingX

Hello,

My name is Liu Zheng, and I am a programmer. I have recently been learning about edge computing and StarlingX. Through the documentation at https://www.starlingx.io/ I have gained a rough understanding of StarlingX's architecture and deployment, but I still have not figured out how StarlingX is actually meant to be used, so what material should I read next? Could you share any other references or demos?

Many thanks, and best wishes.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Brent.Rowsell at windriver.com  Tue Jun 25 16:04:00 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Tue, 25 Jun 2019 16:04:00 +0000
Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
References: <9700A18779F35F49AF027300A49E7C765FEF714C@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7314@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7810@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FEF7D6E@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608AD688@SHSMSX105.ccr.corp.intel.com>
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC255E617@ALA-MBD.corp.ad.wrs.com>

What type of server is this?

Brent

From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Tuesday, June 25, 2019 1:23 AM
To: Ezpeer Chen
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.

Hi Ezpeer,
I assume you can create a VM without issue if you do not configure pci-sriov.
But for the pci-sriov issue, I don't have much idea. Maybe you could have a try with other HW, or have a try with the latest STX ISO.

Best Regards
Shuicheng

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Tuesday, June 25, 2019 10:39 AM
To: Lin, Shuicheng
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
Dear Shuicheng, How could i know what cause my system shutdown? ================================================= controller-0:~$ cat /etc/build.info SW_VERSION="18.10" BUILD_TARGET="Unknown" BUILD_TYPE="Informal" BUILD_ID="n/a" JOB="n/a" BUILD_BY="builder" BUILD_NUMBER="n/a" BUILD_HOST="258041cdd9ff" BUILD_DATE="2018-11-10 23:06:44 +0000" BUILD_DIR="/" WRS_SRC_DIR="/localdisk/designer/builder/2018.10_src/cgcs-root" WRS_GIT_BRANCH="HEAD" CGCS_SRC_DIR="/localdisk/designer/builder/2018.10_src/cgcs-root/stx" CGCS_GIT_BRANCH="HEAD" controller-0:~$ ================================================ -------------- next part -------------- An HTML attachment was scrubbed... URL: From David.Sullivan at windriver.com Tue Jun 25 17:15:45 2019 From: David.Sullivan at windriver.com (Sullivan, David) Date: Tue, 25 Jun 2019 17:15:45 +0000 Subject: [Starlingx-discuss] [DOCS] Ability to add trusted CA certificate Message-ID: As part of this Launchpad there are changes that can impact documentation. Let me know if you require further details. --- Ability to add trusted CA This change allows the administrator to install a trusted CA certificate to the system. A trusted CA certificate will be required if the end user configures a private docker registry that is signed by an unknown Certificate Authority. The provided certificate must be in PEM format. There are two ways to add the trusted CA certificate. 1) Using the CLI system certificate-install -m ssl_ca This will install the certificate on all hosts. New configurations are applied to the hosts to install the certificates. 250.001 (Host Configuration is out-of-date) alarms are raised for those nodes and are cleared as the new configurations are applied. Sample output: system certificate-install -m ssl_ca domain.crt WARNING: For security reasons, the original certificate, containing the private key, will be removed, once the private key is processed. +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | c986249f-b304-4ab4-b88e-14f92e75269d | | certtype | ssl_ca | | signature | ssl_ca_14617336624230451058 | | start_date | 2019-05-22 18:24:41+00:00 | | expiry_date | 2020-05-21 18:24:41+00:00 | +-------------+--------------------------------------+ 2) Using the ansible bootstrap playbook A new optional variable is supported by the bootstrap playbook: ssl_ca_cert. The value of this variable is the full path to the certificate. eg ssl_ca_cert: /path/to/ssl_ca_cert_file When this variable is provided the playbook will install the CA certificate. At this time installing the ssl_ca_cert is not supported during bootstrap replay. Reviews: https://review.opendev.org/#/c/663797/ https://review.opendev.org/#/c/665298/ https://review.opendev.org/#/c/665986/ Launchpad: https://bugs.launchpad.net/starlingx/+bug/1831946 -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Tue Jun 25 17:18:23 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Tue, 25 Jun 2019 17:18:23 +0000 Subject: [Starlingx-discuss] [DOCS] Ability to add trusted CA certificate In-Reply-To: References: Message-ID: <3808363B39586544A6839C76CF81445EA1B7E782@ORSMSX104.amr.corp.intel.com> Thanks David. We'll address this in our docs meeting tomorrow and will get back to you if we have questions. 
-- Mike ________________________________ From: Sullivan, David [David.Sullivan at windriver.com] Sent: Tuesday, June 25, 2019 10:15 AM To: starlingx-discuss at lists.starlingx.io; Tullis, Michael L Subject: [DOCS] Ability to add trusted CA certificate As part of this Launchpad there are changes that can impact documentation. Let me know if you require further details. --- Ability to add trusted CA This change allows the administrator to install a trusted CA certificate to the system. A trusted CA certificate will be required if the end user configures a private docker registry that is signed by an unknown Certificate Authority. The provided certificate must be in PEM format. There are two ways to add the trusted CA certificate. 1) Using the CLI system certificate-install -m ssl_ca This will install the certificate on all hosts. New configurations are applied to the hosts to install the certificates. 250.001 (Host Configuration is out-of-date) alarms are raised for those nodes and are cleared as the new configurations are applied. Sample output: system certificate-install -m ssl_ca domain.crt WARNING: For security reasons, the original certificate, containing the private key, will be removed, once the private key is processed. +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | c986249f-b304-4ab4-b88e-14f92e75269d | | certtype | ssl_ca | | signature | ssl_ca_14617336624230451058 | | start_date | 2019-05-22 18:24:41+00:00 | | expiry_date | 2020-05-21 18:24:41+00:00 | +-------------+--------------------------------------+ 2) Using the ansible bootstrap playbook A new optional variable is supported by the bootstrap playbook: ssl_ca_cert. The value of this variable is the full path to the certificate. eg ssl_ca_cert: /path/to/ssl_ca_cert_file When this variable is provided the playbook will install the CA certificate. At this time installing the ssl_ca_cert is not supported during bootstrap replay. Reviews: https://review.opendev.org/#/c/663797/ https://review.opendev.org/#/c/665298/ https://review.opendev.org/#/c/665986/ Launchpad: https://bugs.launchpad.net/starlingx/+bug/1831946 -------------- next part -------------- An HTML attachment was scrubbed... 
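For anyone scripting the above, a minimal sketch of the two paths described in the note (the certificate file name and the overrides path here are illustrative assumptions, not from the original change):

# CLI path: confirm the file really is PEM, then install it as a trusted CA
openssl x509 -in registry-ca.pem -noout -subject -dates
system certificate-install -m ssl_ca registry-ca.pem
fm alarm-list | grep 250.001   # config-out-of-date alarms should clear as each host applies it

# Bootstrap path: add the variable to the ansible overrides file (commonly $HOME/localhost.yml) before running the playbook
echo 'ssl_ca_cert: /home/sysadmin/registry-ca.pem' >> /home/sysadmin/localhost.yml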
URL: From maria.g.perez.ibarra at intel.com Tue Jun 25 22:54:44 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 25 Jun 2019 22:54:44 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 6/25/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID="20190621T013000Z" from (link) ---------------------------------------------------------------------- Overall Results: Total = 421 Pass = 108 Fail = 6 Blocked = 1 Pass Rate = 25.6% ---------------------------------------------------------------------- Results per Domain: Regression - AIO-SX 21 PASS |1 FAIL|1 BLOCKED Regression - Backup & Restore Regression - Distributed Cloud Regression - Gnoochi 12 PASS Regression - FM Regression - HA Regression - Heat 10 PASS Regression - Horizon 1 PASS Regression - Install and Config Regression - Maintenance Regression - Networking 32 PASS Regression - Nova Regression - Security 15 PASS | 4 FAIL Regression - Storage Regression - Inventory 17 PASS | 1 FAIL System Test --------------------------------------------------------------------------- Bugs: Controller can't unlock after lock on AIO-SX : https://bugs.launchpad.net/starlingx/+bug/1833472 user does not login within configured time(60s) login is aborted : https://bugs.launchpad.net/starlingx/+bug/1833469 removing attributes from bash.log should not be possible : https://bugs.launchpad.net/starlingx/+bug/1833619 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Tue Jun 25 23:26:59 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 25 Jun 2019 23:26:59 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190625 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-25 (link) Status: Green ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 51 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From cindy.xie at intel.com  Wed Jun 26 01:03:26 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 26 Jun 2019 01:03:26 +0000
Subject: [Starlingx-discuss] Re: Asking for advice on using StarlingX
In-Reply-To: <379baff5-a723-46c4-9932-9037b49f5390.xiongzhiwei at baicells.com>
References: <379baff5-a723-46c4-9932-9037b49f5390.xiongzhiwei at baicells.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FB24BD@SHSMSX104.ccr.corp.intel.com>

Hi, Liu Zheng,
Not sure if you've downloaded an ISO from the CENGN server; there is a daily build we publish - select one for which we have "green" sanity results and follow the wiki page for deployment. You may have to set up a proxy or a local registry if you're in the PRC, due to firewall issues. Please use this mailing list for any issues you encounter.

Thx. - cindy

From: xiongzhiwei [mailto:xiongzhiwei at baicells.com]
Sent: Tuesday, June 25, 2019 11:59 PM
To: 504626684; starlingx-discuss
Subject: [Starlingx-discuss] Re: Asking for advice on using StarlingX

I looked into it for a while back in February and March, but have not followed it for the past three months. You could ask the Intel folks; they are the main contributors, and the core team is in Shanghai. You can also raise any questions on the mailing list at any time.

Sent from my DingTalk business mailbox
------------------------------------------------------------------
From: 504626684
Date: June 25, 2019 11:13:51
To: starlingx-discuss
Subject: [Starlingx-discuss] Asking for advice on using StarlingX

Hello,

My name is Liu Zheng, and I am a programmer. I have recently been learning about edge computing and StarlingX. Through the documentation at https://www.starlingx.io/ I have gained a rough understanding of StarlingX's architecture and deployment, but I still have not figured out how StarlingX is actually meant to be used, so what material should I read next? Could you share any other references or demos?

Many thanks, and best wishes.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sxmatch1986 at gmail.com  Wed Jun 26 03:56:14 2019
From: sxmatch1986 at gmail.com (hao wang)
Date: Wed, 26 Jun 2019 11:56:14 +0800
Subject: [Starlingx-discuss] [StarlingX-discuss]How to relate the company's name and mail address in starlingx.biterg.io?
Message-ID:

Hi,

We now compile statistics for community contributions in starlingx.biterg.io, but how do we relate a company's name to the mail address a developer is using? For example, I commit patches from a Gmail address, but it seems it can't be related to my company's name, Fiberhome. Is there any way to change the affiliation configuration, as OpenStack does?

From ezpeerchen at gmail.com  Wed Jun 26 07:56:24 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Wed, 26 Jun 2019 15:56:24 +0800
Subject: [Starlingx-discuss] How to turn off fault management?
In-Reply-To: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E07A@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D94@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E0E1@ALA-MBD.corp.ad.wrs.com>
Message-ID:

Dear all,

Will fault management take actions (reboot or shutdown) based on the BMC's sensor status?

Thanks

Qian, Bin wrote on Tuesday, June 25, 2019 at 2:23 AM:
> SM only requires mtce to reboot a failed controller during a failover, i.e., in a duplex environment, the survivor controller requires mtce to reboot the failed controller.
> SM won't require a reboot when there is only 1 controller available (e.g. an AIO simplex).
>
> Bin
> ------------------------------
> From: MacDonald, Eric [Eric.MacDonald at windriver.com]
> Sent: Monday, June 24, 2019 5:15 AM
> To: Lin, Shuicheng; Liu, Tao; Ezpeer Chen; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] How to turn off fault management?
> > Yes, SM can request mtce to reboot a controller and there will be an > explicit mtce log for that (below) > > > > controller-? is being force failed by SM > > > > *From:* Lin, Shuicheng [mailto:shuicheng.lin at intel.com] > *Sent:* Monday, June 24, 2019 7:48 AM > *To:* MacDonald, Eric; Liu, Tao; Ezpeer Chen; > starlingx-discuss at lists.starlingx.io > *Subject:* RE: [Starlingx-discuss] How to turn off fault management? > *Importance:* High > > > > Hi Eric, > > I guess you mean sm should reboot a host, not shutdown. Is it correct? > > But per Ezpeer’s description, it seems like shutdown. > > BTW, it is with STX 1.0, from 2010.10 release branch. > > > > Best Regards > > Shuicheng > > > > *From:* MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] > *Sent:* Monday, June 24, 2019 7:40 PM > *To:* Lin, Shuicheng ; Liu, Tao < > Tao.Liu at windriver.com>; Ezpeer Chen ; > starlingx-discuss at lists.starlingx.io > *Subject:* RE: [Starlingx-discuss] How to turn off fault management? > > > > SM does not power off a host. > > When you say shutdown you mean power off correct ? > > > > *From:* Lin, Shuicheng [mailto:shuicheng.lin at intel.com > ] > *Sent:* Monday, June 24, 2019 7:38 AM > *To:* MacDonald, Eric; Liu, Tao; Ezpeer Chen; > starlingx-discuss at lists.starlingx.io > *Subject:* RE: [Starlingx-discuss] How to turn off fault management? > *Importance:* High > > > > Hi Eric, > > Here is the collect log for the issue. > > Log files: > https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS > > You could find the issue reproduce step in below mail: > > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/005033.html > > > > It seems it is pci-sriov related issue. From the sm.log, it seems Ethernet > interface is not stable, and cause several services cannot run > successfully, and lead to the shutdown. > > Maybe you could provide some workaround suggestion for him. > > Thanks. > > > > Best Regards > > Shuicheng > > > > *From:* MacDonald, Eric [mailto:Eric.MacDonald at windriver.com > ] > *Sent:* Monday, June 24, 2019 7:18 PM > *To:* Liu, Tao ; Ezpeer Chen ; > starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] How to turn off fault management? > > > > Hi Ezpeer, > > > > In addition to Tao’s point … > > > > The only time maintenance will power off a host outside of explicit > administrative action is if that host’s board management controller is > provisioned, the critical action for a sensor group has been changed to > power cycle AND a sensor in that sensor group reports a debounced critical > severity. > > > > If you are experiencing heartbeat failures that you are trying to debug > you can change the heartbeat failure action to ‘degrade’ or ‘alarm’ only to > avoid the recovery reboot. Not recommended, but available for debug. > > Ø system service-parameter-modify platform maintenance > heartbeat_failure_action=degrade > > Ø system service-parameter-apply platform > > > > Locking a host will prevent host watchdog reboot due to quorum process > failure or watchdog pet failure/timeout > > > > If you are experiencing autonomous host power off then I would look at the > BMC logs for critical or fatal event reports. > > > > Eric. > > > > *From:* Liu, Tao [mailto:Tao.Liu at windriver.com ] > *Sent:* Friday, June 21, 2019 9:33 AM > *To:* Ezpeer Chen; starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] How to turn off fault management? 
> > > > Hi Ezpeer, > > > > The fault management reports fault conditions and significant events in > the system and it does not reboot or power off the controller. The > maintenance system takes proper actions to recover the system When > necessary. > > > > I suggest you to view the active alarms and event history to see what > failures might lead to reboot the controller for recovery. (could it be a > configuration failure?). > > fm alarm-list > > fm event-list > > > > In addition, /var/log/mtcAgent.log provides more details on why the host > is reboot or power-off. > > > > Regards, > > Tao > > > > *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com ] > *Sent:* Friday, June 21, 2019 4:08 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] How to turn off fault management? > > > > Dear all, > > Environment: STX 1.0 (2018/10) all-in-one simplex > > > How could i turn off fault management which cause my system(controller-0) > reboot or power-off? > > > > Best Regards > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.MacDonald at windriver.com Wed Jun 26 11:53:37 2019 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Wed, 26 Jun 2019 11:53:37 +0000 Subject: [Starlingx-discuss] How to turn off fault management? In-Reply-To: References: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E07A@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D94@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E0E1@ALA-MBD.corp.ad.wrs.com> Message-ID: <210898B96CA058408C55992CCAD98676C101E8E3@ALA-MBD.corp.ad.wrs.com> As mentioned below “The only time maintenance will power off a host outside of explicit administrative action is if that host’s board management controller is provisioned, the critical action for a sensor group has been changed to power cycle AND a sensor in that sensor group reports a debounced critical severity.” You would see customer logs to this effect. Eric. From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Wednesday, June 26, 2019 3:56 AM To: Qian, Bin Cc: MacDonald, Eric; Lin, Shuicheng; Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Dear all, Will the fault management to do actions (reboot or shutdown) based on BMC's sensor status ? Thanks Qian, Bin > 於 2019年6月25日 週二 上午2:23寫道: SM only requires mtce to reboot a failed controller during a failover, i.e, in a duplex environment, the survivor controller requires mtce to reboot the failed controller. SM won't require reboot when there is only 1 controller available (e.g an aio simplex). Bin ________________________________ From: MacDonald, Eric [Eric.MacDonald at windriver.com] Sent: Monday, June 24, 2019 5:15 AM To: Lin, Shuicheng; Liu, Tao; Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Yes, SM can request mtce to reboot a controller and there will be an explicit mtce log for that (below) controller-? is being force failed by SM From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Monday, June 24, 2019 7:48 AM To: MacDonald, Eric; Liu, Tao; Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] How to turn off fault management? 
Importance: High Hi Eric, I guess you mean sm should reboot a host, not shutdown. Is it correct? But per Ezpeer’s description, it seems like shutdown. BTW, it is with STX 1.0, from 2010.10 release branch. Best Regards Shuicheng From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Monday, June 24, 2019 7:40 PM To: Lin, Shuicheng >; Liu, Tao >; Ezpeer Chen >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] How to turn off fault management? SM does not power off a host. When you say shutdown you mean power off correct ? From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Monday, June 24, 2019 7:38 AM To: MacDonald, Eric; Liu, Tao; Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] How to turn off fault management? Importance: High Hi Eric, Here is the collect log for the issue. Log files: https://drive.google.com/open?id=1QViho0khiMQpYDOcF5ACZV2cwymgNEMS You could find the issue reproduce step in below mail: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/005033.html It seems it is pci-sriov related issue. From the sm.log, it seems Ethernet interface is not stable, and cause several services cannot run successfully, and lead to the shutdown. Maybe you could provide some workaround suggestion for him. Thanks. Best Regards Shuicheng From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Monday, June 24, 2019 7:18 PM To: Liu, Tao >; Ezpeer Chen >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Hi Ezpeer, In addition to Tao’s point … The only time maintenance will power off a host outside of explicit administrative action is if that host’s board management controller is provisioned, the critical action for a sensor group has been changed to power cycle AND a sensor in that sensor group reports a debounced critical severity. If you are experiencing heartbeat failures that you are trying to debug you can change the heartbeat failure action to ‘degrade’ or ‘alarm’ only to avoid the recovery reboot. Not recommended, but available for debug. > system service-parameter-modify platform maintenance heartbeat_failure_action=degrade > system service-parameter-apply platform Locking a host will prevent host watchdog reboot due to quorum process failure or watchdog pet failure/timeout If you are experiencing autonomous host power off then I would look at the BMC logs for critical or fatal event reports. Eric. From: Liu, Tao [mailto:Tao.Liu at windriver.com] Sent: Friday, June 21, 2019 9:33 AM To: Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Hi Ezpeer, The fault management reports fault conditions and significant events in the system and it does not reboot or power off the controller. The maintenance system takes proper actions to recover the system When necessary. I suggest you to view the active alarms and event history to see what failures might lead to reboot the controller for recovery. (could it be a configuration failure?). fm alarm-list fm event-list In addition, /var/log/mtcAgent.log provides more details on why the host is reboot or power-off. Regards, Tao From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Friday, June 21, 2019 4:08 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How to turn off fault management? 
Dear all, Environment: STX 1.0 (2018/10) all-in-one simplex How could i turn off fault management which cause my system(controller-0) reboot or power-off? Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jun 26 11:58:37 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 26 Jun 2019 11:58:37 +0000 Subject: [Starlingx-discuss] [StarlingX-discuss]How to relate the company's name and mail address in starlingx.biterg.io? In-Reply-To: References: Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A81867@ALA-MBD.corp.ad.wrs.com> I know that Ildiko & Thierry talked about the fact that they do track this during the bitergia overview a couple of weeks ago - I think if you have your company name in Gerrit, it'll get picked up from there. -----Original Message----- From: hao wang Sent: Tuesday, June 25, 2019 11:56 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [StarlingX-discuss]How to relate the company's name and mail address in starlingx.biterg.io? Hi, Now we make statistics for community contributions in starlingx.biterg.io. But how to relate the company's name and mail address that developer using? For example, I use Gmail to commit patch but seems can't relate my company's name Fiberhome. Is there any way to change the configuration of affiliation, like OpenStack doing? _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Brent.Rowsell at windriver.com Wed Jun 26 12:10:37 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 26 Jun 2019 12:10:37 +0000 Subject: [Starlingx-discuss] How to turn off fault management? In-Reply-To: <210898B96CA058408C55992CCAD98676C101E8E3@ALA-MBD.corp.ad.wrs.com> References: <7242A3DC72E453498E3D783BBB134C3EA4E3346C@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101DFF8@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D4F@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E07A@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FEF7D94@SHSMSX101.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C101E0E1@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101E8E3@ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC255FB1B@ALA-MBD.corp.ad.wrs.com> I think the focus here should be why it is rebooting vs disabling functionality. Do we know what the specific failure is ? Brent From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Wednesday, June 26, 2019 7:54 AM To: Ezpeer Chen ; Qian, Bin Cc: Lin, Shuicheng ; Liu, Tao ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? As mentioned below “The only time maintenance will power off a host outside of explicit administrative action is if that host’s board management controller is provisioned, the critical action for a sensor group has been changed to power cycle AND a sensor in that sensor group reports a debounced critical severity.” You would see customer logs to this effect. Eric. From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Wednesday, June 26, 2019 3:56 AM To: Qian, Bin Cc: MacDonald, Eric; Lin, Shuicheng; Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? 
Dear all, Will the fault management to do actions (reboot or shutdown) based on BMC's sensor status ? Thanks Qian, Bin > 於 2019年6月25日 週二 上午2:23寫道: SM only requires mtce to reboot a failed controller during a failover, i.e, in a duplex environment, the survivor controller requires mtce to reboot the failed controller. SM won't require reboot when there is only 1 controller available (e.g an aio simplex). Bin ________________________________ From: MacDonald, Eric [Eric.MacDonald at windriver.com] Sent: Monday, June 24, 2019 5:15 AM To: Lin, Shuicheng; Liu, Tao; Ezpeer Chen; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to turn off fault management? Yes, SM can request mtce to reboot a controller and there will be an explicit mtce log for that (below) controller-? is being force failed by SM -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Jun 26 12:37:15 2019 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 26 Jun 2019 14:37:15 +0200 Subject: [Starlingx-discuss] [StarlingX-discuss]How to relate the company's name and mail address in starlingx.biterg.io? In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007A81867@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007A81867@ALA-MBD.corp.ad.wrs.com> Message-ID: At this point setting affiliations in the Bitergia dashboard is still very much a manual process, and it is not tied to anything else (Gerrit or Foundation profile). We are working with Bitergia to streamline the process, but for the moment, the best is to post your affiliation to this list so that we can fix it manually. I'll associate sxmatch1986 at gmail.com with Fiberhome. Zvonar, Bill wrote: > I know that Ildiko & Thierry talked about the fact that they do track this during the bitergia overview a couple of weeks ago - I think if you have your company name in Gerrit, it'll get picked up from there. > > -----Original Message----- > From: hao wang > Sent: Tuesday, June 25, 2019 11:56 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [StarlingX-discuss]How to relate the company's name and mail address in starlingx.biterg.io? > > Hi, > > Now we make statistics for community contributions in starlingx.biterg.io. But how to relate the company's name and mail address that developer using? For example, I use Gmail to commit patch but seems can't relate my company's name Fiberhome. Is there any way to change the configuration of affiliation, like OpenStack doing? -- Thierry Carrez (ttx) From cindy.xie at intel.com Wed Jun 26 13:44:06 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 26 Jun 2019 13:44:06 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FB2DC2@SHSMSX104.ccr.corp.intel.com> Agenda for 6/26 meeting: - stx.3.0 feature proposal (All) - Redfish support (https://storyboard.openstack.org/#!/story/2005861) - Python2to3 transition (https://wiki.openstack.org/wiki/StarlingX/Python2, story to be created) - non-Openstack patch cleanup (TBD) - Ceph containerization (https://storyboard.openstack.org/#!/story/2005527, spec under review: https://review.opendev.org/#/c/656371/), sizing the effort before we commit the feature into 3.0 timeline. Propose staging option. - Support Kata container (story to be created), not sure yet if this fits into 3.0. first step is to propose TSC and initiate the discussion. 
Best to put the discussion/proposal onto the mailing list. - QAT support in Cinder & Glance (story to be created). AR to Cindy to discussion w/ Vivian for details. - systemd standardization. Saul: found from multi-OS effort, systemd was used to launch services, there is some services used sysInv instead of systemd. Needs to standarization to use systemd and move away from hybrid mode.AR to Saul and Marcela to send the technical proposal to mailing list and create SB. Each team members can bring up your proposed work items so that we can bring up to TSC for discussion & approval. - Ceph test status report (Abraham/Fernando) test status tracking: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1145711595 16 pass, 6 blocked, 3 retest 2 patches to address task 30351 under SB#2003909 have been merged yesterday. 4 P1 tests shall be unblocked. AR: Abraham/Fernando to test the 4 P1 cases with the latest build. - QAT test status report (Ricardo) test status tracking: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=84126711 Moving to embedded QAT devices but not yet replicate the success on PCIe card yet. Will continue to preopare the HW and continue the testing. Already have instructions from Shuicheng, expect to finish the setup by this week. RESTAPI disable/enable - pending Numan's advice for howto. - stx.2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin; Bin) - 1829844 : patch available, under review - 1829855 , patch WIP will upload today. - 1830191 , coding done, working on dev testing will post for review tomorrow. - 1831300 , WIP and need retest. One problem is that we are not on latest mimic version. -1830938, maybe an upstream Ceph issue, cherry pick might be required. -1832854, under debug, root cause not clear yet. -1827258 /1832647, kernel log checked, when OOM found the system allocated >14G huge page memory but the whole system memory is only 16G. From Bin's analysis, there are 7000 huge pages (each huge page size is 2M).When OOM happens, the system was attempt to allocate 4K pages but failed. From Brent, the system reserve 14.5G for system usage, only if all those 14.5G are used up, then OOM will happen. - Opens (all) -----Original Message----- From: Xie, Cindy Sent: Tuesday, June 25, 2019 9:50 PM To: starlingx-discuss at lists.starlingx.io; Rowsell, Brent ; Wold, Saul Subject: RE: Weekly StarlingX non-OpenStack distro meeting Agenda for 6/26 meeting: - stx.3.0 feature proposal (All) - Ceph test status report (Abraham/Fernando) - QAT test status report (Ricardo) - stx.2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin; Bin) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, June 26, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . 
Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From Al.Bailey at windriver.com Wed Jun 26 16:48:31 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Wed, 26 Jun 2019 16:48:31 +0000 Subject: [Starlingx-discuss] Docker image list In-Reply-To: References: <4A1AD93A-CF1E-407C-8BB5-168B64EB7911@intel.com> Message-ID: I don't think all of those images are downloaded during the application-apply, due to overrides selecting a more recent version. The three phases are where images are downloaded are: - Initial phase (ansible) - Platform-integ-apps (after unlock) - Stx-openstack (after application-apply) Would authoring and maintaining a document with these images and versions be a decent way to manage this. (probably in starlingx/config) Or is running scripts the preferred approach? The readme.rst for starlingx/config is light on contents at the moment https://opendev.org/starlingx/config/src/branch/master/README.rst Al -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Monday, June 24, 2019 10:50 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docker image list On Mon, Jun 24, 2019 at 8:59 AM Cordoba Malibran, Erich wrote: > docker.io/openstackhelm/magnum:ocata > docker.io/kolla/ubuntu-source-aodh-evaluator:ocata > docker.io/openstackhelm/heat:newton > docker.io/kolla/ubuntu-source-ceilometer-api:ocata > docker.io/openstackhelm/horizon:ocata [...] Why are these showing up with 'ocata' and 'newton' in the names/tags? Presumably we are not using Ocata or Newton releases. Is this an artifact of how OpenStack-Helm works? dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Jun 26 17:00:04 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 26 Jun 2019 10:00:04 -0700 Subject: [Starlingx-discuss] Docker image list In-Reply-To: References: <4A1AD93A-CF1E-407C-8BB5-168B64EB7911@intel.com> Message-ID: <3e148976-4418-d9d2-da1a-a48333e94bf4@linux.intel.com> On 6/26/19 9:48 AM, Bailey, Henry Albert (Al) wrote: > I don't think all of those images are downloaded during the application-apply, due to overrides selecting a more recent version. > > The three phases are where images are downloaded are: > > - Initial phase (ansible) > - Platform-integ-apps (after unlock) > - Stx-openstack (after application-apply) > > Would authoring and maintaining a document with these images and versions be a decent way to manage this. (probably in starlingx/config) > Or is running scripts the preferred approach? > I think that since this is dynamic information, it might be better as a script than a fixed document. Sau! 
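For example, a throwaway sketch along these lines could capture the current image set from a running system (not an official tool; the output would still need curating per phase):

# images referenced by every pod Kubernetes knows about (covers all three phases once applied)
kubectl get pods --all-namespaces -o jsonpath="{..image}" | tr -s '[[:space:]]' '\n' | sort -u

# everything sitting in the local docker cache, with tags
docker images --format '{{.Repository}}:{{.Tag}}' | sort -u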
> The readme.rst for starlingx/config is light on contents at the moment > https://opendev.org/starlingx/config/src/branch/master/README.rst > > Al > > > -----Original Message----- > From: Dean Troyer [mailto:dtroyer at gmail.com] > Sent: Monday, June 24, 2019 10:50 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Docker image list > > On Mon, Jun 24, 2019 at 8:59 AM Cordoba Malibran, Erich > wrote: >> docker.io/openstackhelm/magnum:ocata >> docker.io/kolla/ubuntu-source-aodh-evaluator:ocata >> docker.io/openstackhelm/heat:newton >> docker.io/kolla/ubuntu-source-ceilometer-api:ocata >> docker.io/openstackhelm/horizon:ocata > [...] > > Why are these showing up with 'ocata' and 'newton' in the names/tags? > Presumably we are not using Ocata or Newton releases. Is this an > artifact of how OpenStack-Helm works? > > dt > From Bill.Zvonar at windriver.com Wed Jun 26 17:29:03 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 26 Jun 2019 17:29:03 +0000 Subject: [Starlingx-discuss] Community Call (June 26, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A81AA7@ALA-MBD.corp.ad.wrs.com> Notes & actions from today's call... MS-3 is official! - Thank you everyone! - Reminder to clean up storyboard: https://storyboard.openstack.org/#!/story/list?status=active&project_group_id=86&tags=stx.2.0 - reminder of code merge guidelines until RC1. This was discussed and sent out previously. - Priority goes to stx.2.0 bug fixes and stx.2.0 MS-3 exceptions (above) - For code unrelated to stx.2.0 (code for deferred items to stx.3.0, new stx.3.0 features, enhancements), only passive/disabled code should be merged in master until the stx.2.0 RC1 branch is created (Aug 5). We will leave it to the judgment of the technical leads and core reviewers to determine if code is safe to merge. If you need an opinion, the release planning team is happy to help (Ghada, Bill and Bruce). Feedback on the new wiki has been positive (Bruce). - Any objections to me making https://wiki.openstack.org/wiki/StarlingX/Draft_new_wiki_home_page the new wiki home page? 
- we agreed to go ahead with the new wiki, Bruce will update it later today, and include a link to the old wiki off the new page regression status - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/005094.html - the 115 testcases that were run were all run manually - regression reports will be sent twice a week - Tuesdays & Thursdays - per Numan, we've started automated regression as well - that'll be an additional 1100+ testcases, over & above the 421 manual that Maria's reporting on - ACTION: Numan & Ada to sort out how they aggregate reporting will be done (manual & automated) automation framework (Numan) - thanks to Bruce, Saul & Erich et al for reviewing the automation framework that Yang is making available to the Community - once it's up, folks can start using it - it includes a good number of testcases - ACTION: Numan/Yang arrange an info session for the Community (in a few weeks after Yang's vacation) defect trend (Bill) - marginally better than last week, but still a major cause of concern Docker image list: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/005063.html - per Brent / Frank, Al Bailey's going to work on this - he has an action to get a forecast for when it can be done - ACTION: Frank update on the forecast for this stx.3.0 next steps (Ghada) - Tentative Release Dates show MS-1 as July 17 -- based on a 6wk offset from the openstack train release - https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=0 - MS-1 criteria - https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Milestones - Release priorities and major features defined. - a list of features - TSC is working through an overall list (etherpad) - the list for 3.0 will ultimately be tracked on the Release Plan google sheet - High level resourcing secured. - an intended resource plan for each feature that's on the list - we agreed to remove the word "tentative" from the 3.0 dates - those are the dates now - Release Candidates: https://etherpad.openstack.org/p/stx-r3-feature-candidates - So far, 4 features are marked as "Agreed for R3 / Do in R3": - Containerization Deferrals: - Backup and Restore - Containerized FM - Containerized openstack clients - CPU manager static policy for non-openstack worker nodes - Distributed Cloud - Upgrade from R2->R3 - Openstack Train Release Support - Python 2->3 cutover - Some specs are in progress: https://review.opendev.org/#/q/project:starlingx/specs+status:ope- - Containerized Ceph - Time Sensitive Networking - Infrastructure and Cluster Monitoring - OVS-DPDK Containerization - Redfish (coming soon) - Other stuff in progress - MultiOS w/ focus on OpenSuse - K8s device plugin integration - Intel GPU >> initial review underway. - Nvidia GPU - Intel FPGA >> investigation starting / spec to follow. - QAT - 17 stories in Storyboard: https://storyboard.openstack.org/#!/story/list?status=active&project_group_id=86&tags=stx.3.0 - This covers some distro.other patch reduction work items (deferred from stx.2.0) - How do we move the planning forward? Can we achieve MS-1 on July 17? 
- we agreed to stay the course - look through the candidate list in the next TSC meeting

first contact (Bill)
- ACTION: Bill start checking if any 'new' people's emails are going unanswered

updates on actions
- sanity - still not boring, but getting greener - we'll keep it in view
  - list of stx-sanity tagged lps: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.sanity&orderby=-datecreated&start=0
- big files
  - ACTION: Scott & Dean to talk about the mechanics of how this would work given a big honkin' storage somewhere
  - ACTION: Frank to talk to CENGN about getting sufficient space (pending any other parameters from Scott)
- Mailing list post size (Brent)
  - Brent's had mailing list replies blocked due to size restrictions
  - per Dean, the limit is ~64k; he'll talk to Jeremy about making it higher - but it may be a global OpenStack limit
  - ACTION: Dean find out what our options for increasing this are
- Mid-cycle Meeting? (Saul)
  - possibly a mid-cycle meeting in September in Ottawa, aligned with some training sessions that are planned
  - ACTION: Bill check with Ian about the logistics/timing
- stx.1.0 installation documentation (Cindy)
  - are we preserving the installation procedures for stx.1.0 as we work on 2.0?
  - ACTION: Bruce (or Doc team) let us know if there is such a thing

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, June 25, 2019 8:45 AM
To: 'starlingx-discuss at lists.starlingx.io'
Subject: Community Call (June 26, 2019)

Reminder of tomorrow's Community call, topics include...

- MS-3 officially declared
- policy for changes going forward
- bug count / resolution forecast
- updated wiki
- first contact

Please feel free to add topics to the agenda at [0].

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190626T1400

From jimmy at openstack.org Wed Jun 26 17:48:11 2019
From: jimmy at openstack.org (Jimmy McArthur)
Date: Wed, 26 Jun 2019 12:48:11 -0500
Subject: [Starlingx-discuss] CFP Deadline for Open Infrastructure Summit Shanghai
Message-ID: <5D13AFDB.2090306 at openstack.org>

Hi Everyone!

The July 2 deadline to submit a presentation [1] for the Open Infrastructure Summit [2] in Shanghai is in less than one week! Submit your session today and join the global community in Shanghai, November 4-6, 2019. Sessions will be presented in both Mandarin and English, so you may submit your presentation in either language.

Submit your presentations, panels, and hands-on workshops [3] before July 2 at 11:59 pm PT (July 3, 2019 at 15:00 China Standard Time).

Tracks [4]:
Container Infrastructure
Hands-on Workshops
AI, Machine Learning & HPC
Private & Hybrid Cloud
Public Cloud
5G, NFV & Edge
Open Development
Getting Started
CI/CD
Security

Upcoming Shanghai Summit Deadlines
* Register now [5] before the early bird registration deadline in early August (USD or RMB options available)
* Apply for Travel Support [6] before August 8. For more information on the Travel Support Program, go here [7].
* Interested in sponsoring the Summit? [8]
* The content submission process for the Forum and Project Teams Gathering will be managed separately in the upcoming months.

We look forward to your submissions!
Cheers,
Jimmy

[1] https://cfp.openstack.org/
[2] https://www.openstack.org/summit/shanghai-2019/
[3] https://cfp.openstack.org/
[4] https://www.openstack.org/summit/shanghai-2019/summit-categories/
[5] https://www.openstack.org/summit/shanghai-2019/
[6] https://openstackfoundation.formstack.com/forms/travelsupportshanghai
[7] https://www.openstack.org/summit/shanghai-2019/travel/
[8] https://www.openstack.org/summit/shanghai-2019/sponsors/

From bruce.e.jones at intel.com Wed Jun 26 17:54:57 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Wed, 26 Jun 2019 17:54:57 +0000
Subject: [Starlingx-discuss] Wiki update
Message-ID: <9A85D2917C58154C960D95352B22818BD07775F1 at fmsmsx123.amr.corp.intel.com>

The new wiki home page is live!

        brucej

From michael.l.tullis at intel.com Wed Jun 26 20:59:18 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Wed, 26 Jun 2019 20:59:18 +0000
Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 6/26/2019
Message-ID: <3808363B39586544A6839C76CF81445EA1B7F026 at ORSMSX104.amr.corp.intel.com>

Greg Waines (copied) was nominated as a core reviewer for docs, and the team is in full agreement, ratifying this nomination. Congratulations Greg!

For other notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation

Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings.

-- Mike

From maria.g.perez.ibarra at intel.com Wed Jun 26 22:22:08 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Wed, 26 Jun 2019 22:22:08 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190626
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-26 (link)

Status: Green

======================
Bare Metal environment
======================

AIO - Simplex:
Setup                    03 TCs
Provision-Containers     01 TCs
Sanity-OpenStack         49 TCs
Sanity-Platform          11 TCs
------------------------------
TOTAL: 64 TCs

AIO - Duplex:
Setup                    03 TCs
Provision-Containers     01 TCs
Sanity-OpenStack         52 TCs
Sanity-Platform          09 TCs
------------------------------
TOTAL: 65 TCs

Standard - Local Storage (2+2):
Setup                    03 TCs
Provision-Containers     01 TCs
Sanity-OpenStack         52 TCs
Sanity-Platform          09 TCs
------------------------------
TOTAL: 65 TCs

Standard - External Storage (2+2+2):
Setup                    03 TCs
Provision-Containers     01 TCs
Sanity-OpenStack         52 TCs
Sanity-Platform          05 TCs
------------------------------
TOTAL: 61 TCs

===================
Virtual Environment
===================

AIO - Simplex:
Setup                    03 TCs
Provisioning             01 TCs
Sanity OpenStack         49 TCs
Sanity Platform          07 TCs
------------------------------
TOTAL: 60 TCs

AIO - Duplex:
Setup                    03 TCs
Provisioning             01 TCs
Sanity OpenStack         51 TCs
Sanity Platform          05 TCs
------------------------------
TOTAL: 61 TCs

Standard - Local Storage (2+2):
Setup                    03 TCs
Provisioning             01 TCs
Sanity OpenStack         52 TCs
Sanity Platform          05 TCs
------------------------------
TOTAL: 61 TCs

Regards
Maria G.
From cindy.xie at intel.com Thu Jun 27 03:26:42 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Thu, 27 Jun 2019 03:26:42 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FB422A at SHSMSX104.ccr.corp.intel.com>

Thanks, Vivian. Let's put this discussion thread back on the mailing list. Do you expect the StarlingX testing team to do specific testing for these two features?
Thx. - cindy

_____________________________________________
From: Zhu, Vivian
Sent: Thursday, June 27, 2019 10:43 AM
To: Xie, Cindy; Chen, Tingjie; Fang, Liang A
Cc: Wang, Shane
Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

Yes, if stx.3.0 takes the OpenStack Train release, no back-port is needed to have the QAT compression feature supported.
- Vivian

SSG OTC NST Storage
Tel: (8621)61167437

_____________________________________________
From: Xie, Cindy
Sent: Thursday, June 27, 2019 10:42 AM
To: Zhu, Vivian; Chen, Tingjie; Fang, Liang A
Cc: Wang, Shane
Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

My understanding is that stx.3.0 is going to be Train based, thus no back-port will be required, correct?

_____________________________________________
From: Zhu, Vivian
Sent: Thursday, June 27, 2019 9:31 AM
To: Xie, Cindy; Chen, Tingjie; Fang, Liang A
Cc: Wang, Shane
Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

It depends on StarlingX's preference; usually we don't want to back-port too many patches to StarlingX. We can evaluate how much effort is needed to back-port/maintain those patches in StarlingX once we finish the upstreaming in July.
Thanks!
- Vivian

SSG OTC NST Storage
Tel: (8621)61167437

_____________________________________________
From: Xie, Cindy
Sent: Thursday, June 27, 2019 8:58 AM
To: Zhu, Vivian; Chen, Tingjie
Cc: Wang, Shane
Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

Good to know that both of these two features will be in stx.3.0. For QAT support in Cinder & Glance, are you saying that we will not have patches in StarlingX, but that by picking up OpenStack Train we will have this feature in stx.3.0?
Thx. - cindy

_____________________________________________
From: Zhu, Vivian
Sent: Thursday, June 27, 2019 8:54 AM
To: Xie, Cindy; Chen, Tingjie
Cc: Wang, Shane
Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

Cindy,
Both of these two features are targeting stx.3.0. We spent much time on bug fixing to freeze stx.2.0, which impacted the Ceph containerization BP a bit; we will chase it back.
- Ceph containerization: Tingjie is preparing the BP and plans to have a POC first for Brent to review. It is the highest priority feature in the storage team and is targeting stx.3.0.
- QAT support in Cinder and Glance: Liang is responsible for this feature. Code development is on track; the plan is to submit it for community review this week or next, and to finish code merging in July so that it is included in the Train release. If stx.3.0 picks up the Train release, this feature will certainly be included. BTW, the Train release is around Oct. 16.
- Vivian

SSG OTC NST Storage
Tel: (8621)61167437

-----Original Message-----
From: Xie, Cindy
Sent: Wednesday, June 26, 2019 11:04 PM
To: Zhu, Vivian; Chen, Tingjie
Cc: Wang, Shane
Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

Vivian/Tingjie,
We discussed the stx.3.0 features which came from the storage team:
- Ceph containerization
- QAT support in Cinder and Glance

Brent & Saul do have concerns regarding the short window for 3.0 and suggest that we plan accordingly with a resource-feasible plan. I am not sure how you prioritize the above two items; I would put Ceph containerization at higher priority, but you have the call. The suggestion from Brent is to do the work in phases: we may not be able to have the full feature enabled, but we can have part of the functionality in and testable. Let me know if you can have a plan in place so that we can meet stx.3.0 MS3 (mid-Oct).
Thx. - cindy

-----Original Message-----
From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Wednesday, June 26, 2019 9:44 PM
To: starlingx-discuss at lists.starlingx.io; Rowsell, Brent; Wold, Saul
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

Agenda for 6/26 meeting:
- stx.3.0 feature proposal (All)
  - Redfish support (https://storyboard.openstack.org/#!/story/2005861)
  - Python2to3 transition (https://wiki.openstack.org/wiki/StarlingX/Python2, story to be created)
  - non-Openstack patch cleanup (TBD)
  - Ceph containerization (https://storyboard.openstack.org/#!/story/2005527, spec under review: https://review.opendev.org/#/c/656371/); sizing the effort before we commit the feature to the 3.0 timeline. Propose a staging option.
  - Support Kata containers (story to be created); not sure yet if this fits into 3.0. First step is to propose it to the TSC and initiate the discussion. Best to put the discussion/proposal onto the mailing list.
  - QAT support in Cinder & Glance (story to be created). AR to Cindy to discuss with Vivian for details.
  - systemd standardization. Saul: found from the multi-OS effort that systemd is used to launch services, but some services use sysinv instead of systemd. This needs to be standardized on systemd, moving away from the hybrid mode. AR to Saul and Marcela to send the technical proposal to the mailing list and create a SB.
  Each team member can bring up proposed work items so that we can bring them to the TSC for discussion & approval.
- Ceph test status report (Abraham/Fernando)
  test status tracking: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1145711595
  16 pass, 6 blocked, 3 retest
  2 patches to address task 30351 under SB#2003909 were merged yesterday. 4 P1 tests shall be unblocked. AR: Abraham/Fernando to test the 4 P1 cases with the latest build.
- QAT test status report (Ricardo)
  test status tracking: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=84126711
  Moving to embedded QAT devices, but the success on the PCIe card has not been replicated yet. Will continue to prepare the HW and continue the testing. Already have instructions from Shuicheng; expect to finish the setup by this week.
  RESTAPI disable/enable - pending Numan's advice for how-to.
- stx.2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin; Bin)
  - 1829844: patch available, under review
  - 1829855: patch WIP, will upload today.
  - 1830191: coding done, working on dev testing; will post for review tomorrow.
  - 1831300: WIP and needs retest. One problem is that we are not on the latest mimic version.
  - 1830938: maybe an upstream Ceph issue; a cherry-pick might be required.
  - 1832854: under debug, root cause not clear yet.
  - 1827258/1832647: kernel log checked; when the OOM occurred, the system had allocated >14G of huge page memory but the whole system memory is only 16G. From Bin's analysis, there are 7000 huge pages (each huge page is 2M). When the OOM happens, the system attempts to allocate 4K pages but fails. From Brent, the system reserves 14.5G for system usage; only if all of that 14.5G is used up should OOM happen.
- Opens (all)

-----Original Message-----
From: Xie, Cindy
Sent: Tuesday, June 25, 2019 9:50 PM
To: starlingx-discuss at lists.starlingx.io; Rowsell, Brent; Wold, Saul
Subject: RE: Weekly StarlingX non-OpenStack distro meeting

Agenda for 6/26 meeting:
- stx.3.0 feature proposal (All)
- Ceph test status report (Abraham/Fernando)
- QAT test status report (Ricardo)
- stx.2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin; Bin)
- Opens (all)

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, June 26, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From ezpeerchen at gmail.com Thu Jun 27 09:01:57 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Thu, 27 Jun 2019 17:01:57 +0800
Subject: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
Message-ID:

Dear all,

Update with new information:
From the IPMI dashboard, I found the component "Inlet2" over temperature.
After I cooled down this component with overnight testing, the issue hasn't happened again.

But the main problem is that I can't find any log or information about *why my StarlingX system shut down*.

If I install bare metal CentOS 7.6 with Kolla-OpenStack, then even when the component "Inlet2" is over temperature, my system won't shut down by itself.

Thanks a lot.

Rowsell, Brent wrote on Wed, Jun 26, 2019 at 12:04 AM:

> What type of server is this ?
>
> Brent
>
> From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
> Sent: Tuesday, June 25, 2019 1:23 AM
> To: Ezpeer Chen
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
>
> Hi Ezpeer,
> I assume you could create a VM without issue if pci-sriov is not configured.
> But for the pci-sriov issue, I don't have much idea.
> Maybe you could try another piece of HW, or try the latest STX ISO.
>
> Best Regards
> Shuicheng
>
> From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
> Sent: Tuesday, June 25, 2019 10:39 AM
> To: Lin, Shuicheng
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] STX 1.0 automatically shutdown and power off.
>
> Dear Shuicheng,
>
> How could I know what caused my system shutdown?
>
> =================================================
> controller-0:~$ cat /etc/build.info
> SW_VERSION="18.10"
> BUILD_TARGET="Unknown"
> BUILD_TYPE="Informal"
> BUILD_ID="n/a"
>
> JOB="n/a"
> BUILD_BY="builder"
> BUILD_NUMBER="n/a"
> BUILD_HOST="258041cdd9ff"
> BUILD_DATE="2018-11-10 23:06:44 +0000"
>
> BUILD_DIR="/"
> WRS_SRC_DIR="/localdisk/designer/builder/2018.10_src/cgcs-root"
> WRS_GIT_BRANCH="HEAD"
> CGCS_SRC_DIR="/localdisk/designer/builder/2018.10_src/cgcs-root/stx"
> CGCS_GIT_BRANCH="HEAD"
> controller-0:~$
> ================================================
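[Editor's note] Since the platform logs show nothing, the BMC's own event log is where a firmware-initiated thermal power-off would normally be recorded. A minimal sketch for inspecting it, assuming ipmitool is installed on the node, the node has a BMC, and a platform-specific sensor name like "Inlet2" (run as root):

    import subprocess

    def ipmi(*args):
        # Run an ipmitool subcommand and return its output as text.
        out = subprocess.check_output(("ipmitool",) + args)
        return out.decode("utf-8", errors="replace")

    # System Event Log: look for temperature / power-off entries.
    for line in ipmi("sel", "list").splitlines():
        if any(k in line.lower() for k in ("temp", "thermal", "power off")):
            print("SEL:", line)

    # Current sensor readings, e.g. the "Inlet2" sensor mentioned above.
    for line in ipmi("sensor", "list").splitlines():
        if "inlet" in line.lower():
            print("SENSOR:", line)

If the SEL shows a critical temperature event at the time of the power-off, the shutdown would have come from the BMC/firmware rather than from StarlingX itself, which would explain why nothing appears in the host logs.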
From bin.yang at intel.com Thu Jun 27 09:45:25 2019
From: bin.yang at intel.com (Yang, Bin)
Date: Thu, 27 Jun 2019 17:45:25 +0800
Subject: [Starlingx-discuss] LP1827258 (OOM on compute node) analysis
Message-ID: <20190627094525.GB10733 at desktop-xfce4>

Hi,
   LP1827258 [0] highlights that an "order zero" (4KB) allocation causes OOM. It looks unreasonable that a 4KB allocation failed. Here we have two problems:
   1. Why a 4KB memory allocation causes the OOM
   2. Why the system does not have enough memory

1. Why a 4KB memory allocation causes the OOM
=============================================
From compute-1_20190507.124154/var/log/kern.log [1]

    [ 1515.471830] calico-node invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=999
    ...
    [ 1515.471950] Node 0 DMA free:15848kB min:60kB low:72kB high:88kB
    [ 1515.471954] lowmem_reserve[]: 0 2800 15838 15838
    [ 1515.471956] Node 0 DMA32 free:63620kB min:11480kB low:14348kB high:17220kB
    [ 1515.471959] lowmem_reserve[]: 0 0 13038 13038
    [ 1515.471961] Node 0 Normal free:53148kB min:53464kB low:66828kB high:80196kB
    [ 1515.471964] lowmem_reserve[]: 0 0 0 0
    ...
    [ 1515.472142] Out of memory: Kill process 60356 (kubernetes-entr) score 1000 or sacrifice child

gfp_mask=0x201da: it means the page allocation is from ZONE_NORMAL.
But for Node 0 Normal, free:53148kB < min:53464kB: it does not have enough space.

Conclusion:
*******************************************************
* min is the oom watermark.                           *
* Since free < min, the kernel starts the oom killer. *
*******************************************************

We can find the kernel code for this logic:
------------------------------------------
mm/page_alloc.c: __zone_watermark_ok()

	/*
	 * Check watermarks for an order-0 allocation request. If these
	 * are not met, then a high-order request also cannot go ahead
	 * even if a suitable page happened to be free.
	 */
	if (free_pages <= min + z->lowmem_reserve[classzone_idx])
		return false;

	/* If this is an order-0 request then the watermark is fine */
	if (!order)
		return true;

And here is a related document: https://www.kernel.org/doc/Documentation/sysctl/vm.txt

min_free_kbytes:
---------------
This is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a watermark[WMARK_MIN] value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this to lower than 1024KB, your system will become subtly broken, and prone to deadlock under high loads. Setting this too high will OOM your machine instantly.

And the watermarks are set up based on this kernel parameter:
----------------------------------------------------------
mm/page_alloc.c init_per_zone_wmark_min()

Conclusion:
***************************************************************************
* The default setting is calculated based on your total system memory    *
* size. I think min watermark = 53464kB for 16GB is reasonable.          *
***************************************************************************

2. Why the system does not have enough memory
=============================================
From compute-1_20190507.124154/var/log/kern.log [1]

    2019-05-06T16:24:12.266 localhost kernel: debug [ 0.000000] On node 0 totalpages: 4174118
    ...
    2019-05-06T17:28:56.749 compute-1 kernel: info [ 1515.471986] Node 0 hugepages_total=1 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
    2019-05-06T17:28:56.749 compute-1 kernel: info [ 1515.471987] Node 0 hugepages_total=6807 hugepages_free=6807 hugepages_surp=0 hugepages_size=2048kB
    ...

From hieradata/192.168.204.77.yaml [1]:

    platform::compute::hugepage::params::vm_2M_pages: '"7024,7172"'
    ...
    platform::compute::params::worker_base_reserved: ("node0:8000MB:1" "node1:2000MB:1")

From puppet.log [0]:

    ... Exec[Allocate 7024 /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages]
    ... Exec[Allocate 7172 /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages]
    ...

The total memory on node 0 is 16GB.
From the kernel log, the 2M hugepage count is 6807, which is smaller than 7024. It looks like the system does not have 7024 2M pages.
But the total expected hugepage size is: 7024*2M + 1G = 14.7GB. That is not reasonable, because we have several reserved memory resources, as below:
1. 8GB reserved by worker_reserved.conf: WORKER_BASE_RESERVED=("node0:8000MB:1" "node1:2000MB:1")
2. 10% reserved by the code below:
   sysinv host.py: vm_hugepages_nr_2M = int(m.vm_hugepages_possible_2M * 0.9)

After code review, it looks like the hugepage allocation check code has a problem. [2] It only checks whether the total allocated memory is bigger than the total node memory size. If the pending hugepage size is between the max possible size and the node total size, the check passes. By default, if there is no pending hugepage request, _update_huge_pages() will allocate (m.vm_hugepages_possible_2M*0.9) 2M hugepages. But vm_hugepages_nr_2M_pending is used with priority.

If a user configures 2M hugepages manually with a wrong size, this issue is triggered. The hugepage allocation size then overflows, the normal 4K pages are no longer sufficient, and the OOM min watermark is hit.

A patch has been submitted for review. [2]

[0] https://bugs.launchpad.net/starlingx/+bug/1827258/
[1] https://bugs.launchpad.net/starlingx/+bug/1827258/+attachment/5262103/+files/LP1827258.tar
[2] https://review.opendev.org/#/c/667811/1/sysinv/sysinv/sysinv/sysinv/api/controllers/v1/host.py
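[Editor's note] To make the failure mode concrete, here is a minimal sketch of the kind of semantic check described above. This is not the actual sysinv change in [2]; the names are illustrative, modeled on the fields quoted from host.py, and the exact accounting (e.g. whether 1G pages are already folded into vm_hugepages_possible_2M) is an assumption.

    # Illustrative check, not the real sysinv code: validate a pending 2M
    # hugepage request against the pages actually available for VMs, not
    # against the raw node memory size.
    def check_2m_hugepages(vm_hugepages_nr_2M_pending,
                           vm_hugepages_possible_2M,
                           vm_hugepages_nr_1G=0):
        # A 1G page consumes the equivalent of 512 x 2M pages from the
        # same pool (assumption: the pool is shared).
        available_2m = vm_hugepages_possible_2M - vm_hugepages_nr_1G * 512
        if vm_hugepages_nr_2M_pending > available_2m:
            raise ValueError(
                "%d 2M pages requested but only %d available; the request "
                "would eat into the 4K pool and risk OOM" %
                (vm_hugepages_nr_2M_pending, available_2m))
        return vm_hugepages_nr_2M_pending

With the numbers above (node 0 asked for 7024 2M pages, yet the kernel could only stand up 6807 alongside the 1G page), a bound of this shape would have rejected the configuration instead of letting the allocation overflow into the 4K pool.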
From marcel at schaible-consulting.de Thu Jun 27 13:45:53 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Thu, 27 Jun 2019 15:45:53 +0200 (CEST)
Subject: [Starlingx-discuss] High availability for instances
Message-ID: <667007265.5044.1561643153998 at webmail.strato.de>

Hi,

we have two compute nodes, each running 1 VM instance with an application. Because this application should be available 24/7, we want to test a failover from one compute node to the other.

I found a user story "High Availability for Virtual Machines" (http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/ha_vm.html). Is this sub-project still active, and what is its current state?

Any other idea how to accomplish a failover with the upcoming release of StarlingX?

Thanks!

Marcel

From cindy.xie at intel.com Thu Jun 27 05:31:07 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Thu, 27 Jun 2019 05:31:07 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FB4438 at SHSMSX104.ccr.corp.intel.com>

Brent,
I noticed that we have a storyboard [1] approved for stx.3.0 from Matt. By reading the spec [2], I am thinking this might fall into the non-openstack-dist project scope. Can I volunteer to manage the progress in the community?
Thx. - cindy

[1] https://storyboard.openstack.org/#!/story/2005733
[2] https://review.opendev.org/#/c/665208/

From scott.little at windriver.com Thu Jun 27 15:05:46 2019
From: scott.little at windriver.com (Scott Little)
Date: Thu, 27 Jun 2019 11:05:46 -0400
Subject: [Starlingx-discuss] Docker image list
Message-ID:

Is this what you are looking for?

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/images-centos-stable-latest.lst

On 2019-06-24 9:20 a.m., Sun, Austin wrote:
> Echo this request. Publishing a list of images per release really helps a lot of developers, especially those working behind a proxy.
>
> Thanks.
> BR
> Austin Sun.
>
> From: Curtis [mailto:serverascode at gmail.com]
> Sent: Monday, June 24, 2019 7:39 PM
> To: Li, Cheng1
> Cc: starlingx-discuss at lists.starlingx.io; Xu, Chenjie
> Subject: Re: [Starlingx-discuss] Docker image list
>
> On Mon, Jun 24, 2019 at 2:45 AM Li, Cheng1 wrote:
>
> Hello Starlingxer,
>
> As you know, many docker images are pulled during StarlingX deployment. It may be fast to pull all these images in America, but it's very slow in China.
>
> To speed up StarlingX deployment, I have set up a private docker registry, for which I sync images every night from the upstream registries by cron job.
>
> I installed StarlingX without using my private docker registry so that I could get the upstream docker image list. Every item in the list is synced every day.
>
> This works fine except that the docker image list changes sometimes. In that case, I would have to collect the image list by deploying without using the private docker registry, which is very slow.
>
> So I wonder if it's possible to publish the docker image list file together with the ISO and tarball on CENGN.
> I know we do sanity tests for each ISO; maybe we can run 'docker images' in the sanity test to collect the docker image list?
>
> I'd love to see the same thing. For the workshop we did at the last summit I would have to deploy, then note what images were deployed, then download them all. Not automatable. If we could publish a list of images per release that would help immensely. :)
>
> Thanks,
> Curtis
>
> Thanks,
> Cheng
>
> --
> Blog: serverascode.com

From Don.Penney at windriver.com Thu Jun 27 15:11:25 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Thu, 27 Jun 2019 15:11:25 +0000
Subject: [Starlingx-discuss] Docker image list
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC14F6D79 at ALA-MBD.corp.ad.wrs.com>

No, this is the list of images we build. The request is for a list of non-starlingx-built images that are pulled at runtime.
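[Editor's note] Until such a runtime list is published, it can be scraped from a deployed node and fed into a nightly mirror job of the kind Cheng describes. A rough sketch, assuming the docker CLI is usable on the node; the registry address is a placeholder:

    import subprocess

    REGISTRY = "registry.example.com:5000"  # placeholder private registry

    def sh(*args):
        return subprocess.check_output(args).decode().splitlines()

    def runtime_image_list():
        # repo:tag for every image currently present on this node.
        fmt = "--format={{.Repository}}:{{.Tag}}"
        return [i for i in sh("docker", "images", fmt) if "<none>" not in i]

    def strip_registry(image):
        # Drop a leading registry host (it contains '.' or ':'), keep the
        # namespace, e.g. docker.io/starlingx/foo:tag -> starlingx/foo:tag
        parts = image.split("/", 1)
        if len(parts) == 2 and ("." in parts[0] or ":" in parts[0]):
            return parts[1]
        return image

    def mirror(images):
        # Pull each upstream image, retag it, push it to the private registry.
        for image in images:
            subprocess.check_call(["docker", "pull", image])
            target = "%s/%s" % (REGISTRY, strip_registry(image))
            subprocess.check_call(["docker", "tag", image, target])
            subprocess.check_call(["docker", "push", target])

    if __name__ == "__main__":
        images = runtime_image_list()
        print("\n".join(images))  # the per-ISO list that could be published
        # mirror(images)          # e.g. run nightly from cron to stay in sync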
From bruce.e.jones at intel.com Thu Jun 27 15:43:34 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Thu, 27 Jun 2019 15:43:34 +0000
Subject: [Starlingx-discuss] StarlingX 3.0 features
Message-ID: <9A85D2917C58154C960D95352B22818BD07786A8 at fmsmsx123.amr.corp.intel.com>

Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the "Not yet discussed" features before closing on the 3.0 feature list. Thank you!

        brucej

Work item | TSC status | Lead | Status
IA platform features | Not yet discussed | Abraham, Saul, Ada | Most work is validation; new features are integrated as we adopt newer kernels over time. Real-time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable.
Containerize OVS DPDK | AR Yong | Forrest | Not yet approved; Yong to get with Forrest and confirm intent
Performance testing | Not yet discussed | Victor, Ada | Proposal in progress
FPGA accelerator support | Push to 4.0 | Abraham, Ada | Too big for 3.0 but will likely need to start soon. FPGA hardware has been ordered
OpenStack Train integration | Approved | Bruce (Dean) | Continuous integration from OpenStack master
Containerized Ceph | Push to 4.0 | Vivian | Too big for 3.0 but will likely need to start soon
Time Sensitive Networking | Approved | Forrest | Spec in progress
Kubernetes plugins for IA | Partial | Cindy | Some reviews in progress; QAT approved, FPGA likely 4.0
Redfish | Approved | Cindy | Spec in progress
IOT device management | Not yet discussed | Abraham | Demo'd @ Denver. POC / pathfinding work for a customer in progress; item likely too big for 3.0
SUSE build support & enablement | Approved | Abraham, Saul | In progress; previously approved for 2.0 and continuing on
Containerized OpenStack Clients | Approved | Dean? | Nearly completed for 2.0, pushed to 3.0

From Brent.Rowsell at windriver.com Thu Jun 27 15:57:56 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Thu, 27 Jun 2019 15:57:56 +0000
Subject: [Starlingx-discuss] StarlingX 3.0 features
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC2565E0F at ALA-MBD.corp.ad.wrs.com>

Bruce,

Thanks. A couple of comments/questions.

1) What's the difference between FPGA accelerator support and the k8s FPGA device plugin? As discussed at the TSC two wks ago, I have a dev that will be doing a spec for the latter.
2) Containerized CEPH. It would be good to break this into two specs I think, one for prep content (R3) and one to complete the integration.
3) What do we mean by Lead? Spec owner?

Brent

From ada.cabrales at intel.com Thu Jun 27 16:13:40 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Thu, 27 Jun 2019 16:13:40 +0000
Subject: [Starlingx-discuss] [ Test ] Meeting notes - 06/24/2019
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CE36C0D at FMSMSX114.amr.corp.intel.com>

Agenda for 06/24
Attendees: Elio, Richo, Al, Cristopher, JC, Maria P, Numan, Yang, Bruce

1. Regression testing status - Elio, Numan
   Tracker - https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=322455033
   MS-3 declared on 06/22. ISO for regression testing is from 06/21.
   Intel - 110 tests executed manually (began last week), 7 failures. Automated regression to begin today. Plan is to execute one domain per day, in 4 configs. Running on virtual, because the test cases were created in this env and we are having problems running them on bare metal.
   Wind River - automated regression ran this weekend - results to be updated, launchpads in progress. Some failures in nova for live migration.
   Reports to be sent Tuesday and Thursday. Maria P to send the first report today. Goal is to finish by the last week of July.
   Update the title (in the tracker) and status of the bugs found.
   Ask Ghada to create a tag for bugs found during regression - stx.regression - Numan

2. Sanity status
   Two greens in a row. Today's looks good so far.
   The deployment times continue to increase. Numan to send info about their setup and deployment times. Cristopher to collect the info from the proxy config.
   Unlocking the master controller is taking more and more time. Track this time.

3. Testing framework - Erich finished the review
   Elio and JC - please review the code.
   wrsroot patch is going to be submitted this week. For history - it doesn't make sense to include it in this commit.
   The testing framework is a parallel project; it doesn't have impact on the final stx product.
   Saul - is there a deployment capability included in the testing framework, or does it cover the testing only?
   Yang - it is for running the tests on an already configured system.
   How to address issues found in the framework (launchpads?) Launchpads will be the way of reporting; however, we have to make sure not to affect stx statistics. Proposal is to use the prefix [stx-test] in the title and also use a tag. For new features/tests, use storyboard, as the project does. Ada to begin a mail thread for notifying this.
   Yang - on vacation starting this Friday.

4. Feature testing status - Total/pass/fail/block
   Containers
     Ada - 75/40/1/2 - Ask to be updated offline.
     Numan - 82/58/3/0 - No update.
   OpenStack patch elimination
     Ada - 50/21/0/6 - Testing has been resumed. Latest ISO includes features that were blocking execution.
     Numan - 8/4/4/0 - Retesting launchpads.
   Centos 7.6 -
   QAT - 12/9/0/0 - 10 executed, 1 pending (requires an image with the QAT driver). REST API pending - help required to define the steps. Numan to check with Cris.
   Containerized OVS - 15/8/0/0 - In progress; results to be updated soon. 50% already done. No problems identified so far.
   Ceph upgrade - 26/15/0/7 - 6 blocked, 3 for retesting, 1 deferred - storyboard to be created. i/o test to be run soon.

5. Opens
   Numan - some Intel people are going to WR for training. This will be in late August. Request is in the approval loop.

From bruce.e.jones at intel.com Thu Jun 27 16:16:18 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Thu, 27 Jun 2019 16:16:18 +0000
Subject: [Starlingx-discuss] StarlingX 3.0 features
Message-ID: <9A85D2917C58154C960D95352B22818BD07787EE at fmsmsx123.amr.corp.intel.com>

Thank you, Brent, for your review and questions.

1) FPGA accelerator support is for OpenStack (e.g. Cyborg integration)
2) Agree that the Containerized Ceph spec can/should be split
3) "Lead" is the person responsible internally for getting the work done and is the contact for any questions about the feature. It may or may not be the person who writes the spec.

        brucej

From Al.Bailey at windriver.com Thu Jun 27 16:44:27 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Thu, 27 Jun 2019 16:44:27 +0000
Subject: [Starlingx-discuss] Docker image list
Message-ID:

Here is a Launchpad so that this activity does not get forgotten, in case anyone wants to pursue it:

https://bugs.launchpad.net/starlingx/+bug/1834504

Al

From dtroyer at gmail.com Thu Jun 27 17:32:02 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Thu, 27 Jun 2019 12:32:02 -0500
Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase
Message-ID:

On Wed, Jun 19, 2019 at 6:26 PM Khalil, Ghada wrote:
> FYI. The external placement service code merged earlier today:
> https://review.opendev.org/#/c/662614/
> https://review.opendev.org/#/c/662371/

And to close the loop, the Github PR is merged, stx-nova branch stx/stein.2 is now ready to be pulled for further testing.

dt

--
Dean Troyer
dtroyer at gmail.com

From Frank.Miller at windriver.com Thu Jun 27 18:30:29 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Thu, 27 Jun 2019 18:30:29 +0000
Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase
Message-ID:

Thanks for the update Dean. We'll need our nova docker image to be rebuilt to pick up this branch. The docker images are built weekly on Monday nights. Don, can you update the appropriate file to point to this new branch?

Frank

From bruce.e.jones at intel.com Thu Jun 27 19:54:55 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Thu, 27 Jun 2019 19:54:55 +0000
Subject: [Starlingx-discuss] EdgeX deeper integration?
Message-ID: <9A85D2917C58154C960D95352B22818BD0778EFA at fmsmsx123.amr.corp.intel.com>

We had an internal discussion today about EdgeX. We are seeing signs of it increasing in use and importance in the Edge ecosystem. It is fairly straightforward to build and run an EdgeX application under StarlingX today. We had it running in the Intel booth at the Denver Summit.
My question for the community is this: Is there value or interest in making EdgeX apps even easier to run within StarlingX? For example, we could create an EdgeX application in StarlingX and allow users to apply it to the system, to allow the EdgeX services to run and be managed by StarlingX. This would add some ease of use benefits for EdgeX users while also putting us in the position of maintaining an up to date version of EdgeX. Is this something we should work on as a community? brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dariush.Eslimi at windriver.com Thu Jun 27 20:32:20 2019 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Thu, 27 Jun 2019 20:32:20 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26 References: <2FD5DDB5A04D264C80D42CA35194914F35FB2DC2@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB3314@SHSMSX104.ccr.corp.intel.com> <371DF9A763E9F44F924F4A821FC070264D0D8115@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB3DD9@SHSMSX104.ccr.corp.intel.com> <371DF9A763E9F44F924F4A821FC070264D0D8208@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB4111@SHSMSX104.ccr.corp.intel.com> <371DF9A763E9F44F924F4A821FC070264D0D83B3@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB422A@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB4438@SHSMSX104.ccr.corp.intel.com> Message-ID: Resend with history cleanup as message reached the 40K limit. From: Eslimi, Dariush Sent: June-27-19 4:29 PM To: 'Xie, Cindy' ; Rowsell, Brent ; Peters, Matt ; Wold, Saul Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26 Cindy, This is mostly in stx-config and to some extent containers (as we use the app framework for packaging and installation), we have started the implementation and few code reviews are already posted by John and Kevin as WIP that are under code review and waiting for spec approval. I will be leading this effort and plan to have a demo on how far we have come and to collect the community feedback. Thanks, Dariush From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: June-27-19 1:31 AM To: Rowsell, Brent >; Peters, Matt >; Wold, Saul > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26 Brent, I noticed that we have a storyboard [1] approved for stx.3.0 from Matt. By reading the spec [2], I am thinking this might fall into non-openstack-dist project scope. Can I volunteer to manage the progress in the community? Thx. - cindy [1] https://storyboard.openstack.org/#!/story/2005733 [2] https://review.opendev.org/#/c/665208/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Thu Jun 27 20:31:38 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 27 Jun 2019 20:31:38 +0000 Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0C153DA55@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC14F719B@ALA-MBD.corp.ad.wrs.com> Hi Frank, This update should be done by whoever is responsible for the change. 
It requires updating the image directives file to reference the new branch as the PROJECT_REF value:
https://opendev.org/starlingx/upstream/src/branch/master/openstack/python-nova/centos/stx-nova.stable_docker_image#L5

Instructions on building images can be found here:
https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Image_Build_Command

If I wanted to test the build of this updated directives file, I could use the latest CENGN-built base and wheels, and run:

BUILD_STREAM=stable
BRANCH=master
CENTOS_BASE=starlingx/stx-centos:${BRANCH}-${BUILD_STREAM}-latest
WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-${BUILD_STREAM}-wheels.tar

time $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \
    --stream ${BUILD_STREAM} \
    --base ${CENTOS_BASE} \
    --wheels ${WHEELS} \
    --only stx-nova

Cheers,
Don.
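For illustration only, the directives-file update Don describes might look roughly like this; it assumes the file carries a line of the form PROJECT_REF=<branch> and that starlingx/upstream is checked out locally (both are assumptions, not verified against the actual file):

    # Sketch: point the stx-nova image build at the new branch.
    cd upstream    # hypothetical local checkout of opendev.org/starlingx/upstream
    sed -i 's|^PROJECT_REF=.*|PROJECT_REF=stx/stein.2|' \
        openstack/python-nova/centos/stx-nova.stable_docker_image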
-----Original Message-----
From: Miller, Frank [mailto:Frank.Miller at windriver.com]
Sent: Thursday, June 27, 2019 2:30 PM
To: Dean Troyer; starlingx
Subject: Re: [Starlingx-discuss] [stx-nova] Stein branch rebase

Thanks for the update Dean. We'll need our nova docker image to be rebuilt to pick up this branch. The docker images are built weekly on Monday nights. Don, can you update the appropriate file to point to this new branch?

Frank

-----Original Message-----
From: Dean Troyer [mailto:dtroyer at gmail.com]
Sent: Thursday, June 27, 2019 1:32 PM
To: starlingx
Subject: Re: [Starlingx-discuss] [stx-nova] Stein branch rebase

On Wed, Jun 19, 2019 at 6:26 PM Khalil, Ghada wrote:
> FYI. The external placement service code merged earlier today:
> https://review.opendev.org/#/c/662614/
> https://review.opendev.org/#/c/662371/

And to close the loop, the Github PR is merged, stx-nova branch stx/stein.2 is now ready to be pulled for further testing.

dt

--
Dean Troyer
dtroyer at gmail.com

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From bruce.e.jones at intel.com Thu Jun 27 20:32:27 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Thu, 27 Jun 2019 20:32:27 +0000
Subject: [Starlingx-discuss] Feature backlog tracking proposal
Message-ID: <9A85D2917C58154C960D95352B22818BD0778FA8@fmsmsx123.amr.corp.intel.com>

At the TSC meeting today we discussed the release policy doc review in the governance repo [0] and Ian's feedback in that review. He's asked us to think about how to track work items for future releases. I volunteered to take the AR to the Release team, and we discussed it in the team meeting today.

In the meeting we reviewed several options for feature tracking:
* Jira - Pros: well known, full featured. Cons: Not open source, not integrated with opendev infra
* Trello - Pros: free, full featured, light and easy to use. Cons: Not open source, not integrated with opendev infra
* Storyboard - Pros: part of opendev infra, already in use. Cons: Missing key features
* Launchpad bugs - Pros: part of opendev infra, already in use. Cons: Already used for bugs, not really suited for feature tracking
* Launchpad blueprints - Pros: part of opendev infra, has key features (priority, approval, owner, tracking). Cons: Not yet configured for StarlingX

In the past we have used etherpads, ethercalcs and Google docs for this kind of work as well. None of those fully met our needs.

I recommended to the Release team today that we adopt Launchpad Blueprints for feature tracking. This would solve several problems for the project - it would give us a way to track features, prioritize features against each other, formally approve feature content, and give the project a public work item backlog. Blueprints are used by several OpenStack teams and seem ideally suited for what we need. In fact, during the meeting we enabled the use of Blueprints for StarlingX and created a test BP. You can find it here [1]. If you want to see how other projects use Blueprints you can see an example from Nova here [2].

I've put this on the agenda for the next community meeting so we can discuss the proposal more broadly. In the meantime, the tool is available to the project. There are some policy and process issues that need to be resolved, and it may need some configuration changes. So please don't put your whole backlog into it yet, but feel free to try it out. Everyone who has access to our Launchpad bugs should have access to the Blueprints.

brucej

[0] https://review.opendev.org/#/c/658499/
[1] https://blueprints.launchpad.net/starlingx
[2] https://blueprints.launchpad.net/nova

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Dariush.Eslimi at windriver.com Thu Jun 27 20:35:55 2019
From: Dariush.Eslimi at windriver.com (Eslimi, Dariush)
Date: Thu, 27 Jun 2019 20:35:55 +0000
Subject: Re: [Starlingx-discuss] StarlingX 3.0 features
In-Reply-To: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com>
Message-ID:

Bruce,
The RedFish changes are almost all in the maintenance area; I do not think we need a Lead per feature in this case, as Eric and Zhipeng are already collaborating very closely on this. I will be leading this feature to completion.
Thanks,
Dariush

From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
Sent: June-27-19 11:44 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] StarlingX 3.0 features

Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the "Not yet discussed" features before closing on the 3.0 feature list. Thank you!

brucej

Work item | TSC status | Lead | Status
IA platform features | Not yet discussed | Abraham, Saul, Ada | Most work is validation, new features are integrated as we adopt newer kernels over time. Real time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable.
Containerize OVS DPDK | AR Yong | Forrest | Not yet approved, Yong to get with Forrest and confirm intent
Performance testing | Not yet discussed | Victor, Ada | Proposal in progress
FPGA accelerator support | Push to 4.0 | Abraham, Ada | Too big for 3.0 but will likely need to start soon. FPGA hardware has been ordered
OpenStack Train integration | Approved | Bruce (Dean) | Continuous integration from OpenStack master
Containerized Ceph | Push to 4.0 | Vivian | Too big for 3.0 but will likely need to start soon
Time Sensitive Networking | Approved | Forrest | Spec in progress
Kubernetes plugins for IA | Partial | Cindy | Some reviews in progress, QAT approved, FPGA likely 4.0
Redfish | Approved | Cindy | Spec in progress
IOT device management | Not yet discussed | Abraham | Demo'd @ Denver. POC / pathfinding work for a customer in progress, item likely too big for 3.0
SUSE build support & enablement | Approved | Abraham, Saul | In progress, previously approved for 2.0 and continuing on
Containerized OpenStack Clients | Approved | Dean? | Nearly completed for 2.0, pushed to 3.0

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yong.hu at intel.com Thu Jun 27 20:50:19 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Thu, 27 Jun 2019 20:50:19 +0000
Subject: [Starlingx-discuss] EdgeX deeper integration?
Message-ID: <647E425D-BA6C-425C-8E34-6215553202E7@intel.com>

FYI: https://www.edgexfoundry.org
Currently it offers a manual way of bringing up all the micro-services via docker-compose (https://github.com/edgexfoundry/developer-scripts/blob/master/compose-files/docker-compose.yml) on a host or so-called gateway (which is expected to have docker) based on an x86 or arm CPU. With StarlingX, we could install and run all the micro-services and have StarlingX (k8s) manage them.
-Yong

On 27/06/2019, 12:55 PM, "Jones, Bruce E" wrote:
We had an internal discussion today about EdgeX. We are seeing signs of it increasing in use and importance in the Edge ecosystem. It is fairly straightforward to build and run an EdgeX application under StarlingX today. We had it running in the Intel booth at the Denver Summit. My question for the community is this: Is there value or interest in making EdgeX apps even easier to run within StarlingX? For example, we could create an EdgeX application in StarlingX and allow users to apply it to the system, to allow the EdgeX services to run and be managed by StarlingX. This would add some ease of use benefits for EdgeX users while also putting us in the position of maintaining an up to date version of EdgeX. Is this something we should work on as a community?
brucej

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From erich.cordoba.malibran at intel.com Thu Jun 27 21:00:44 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Thu, 27 Jun 2019 21:00:44 +0000
Subject: Re: [Starlingx-discuss] EdgeX deeper integration?
In-Reply-To: <9A85D2917C58154C960D95352B22818BD0778EFA@fmsmsx123.amr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BD0778EFA@fmsmsx123.amr.corp.intel.com>
Message-ID: <8b2eee427bc937014881a13cb45414bfb8c19443.camel@intel.com>

Actually, in the ongoing review for the pytest framework there is this setup of EdgeX in k8s:
https://review.opendev.org/#/c/665419/3/automated-pytest-suite/testcases/functional/z_containers/test_kube_edgex_services.py
This could be a good starting point to create an EdgeX application.
-Erich
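As a hedged sketch of how such a deployment could be inspected on the StarlingX Kubernetes cluster (the edgex namespace and the edgex-core-metadata name are invented for illustration, not taken from the review):

    # Sketch: check an EdgeX deployment running under k8s.
    kubectl get pods -n edgex                              # all EdgeX micro-service pods
    kubectl get services -n edgex                          # their cluster services
    kubectl logs -n edgex deployment/edgex-core-metadata   # logs from one example service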
On Thu, 2019-06-27 at 19:54 +0000, Jones, Bruce E wrote:
> We had an internal discussion today about EdgeX. We are seeing signs
> of it increasing in use and importance in the Edge ecosystem.
>
> It is fairly straightforward to build and run an EdgeX application
> under StarlingX today. We had it running in the Intel booth at the
> Denver Summit.
>
> My question for the community is this: Is there value or interest in
> making EdgeX apps even easier to run within StarlingX? For example,
> we could create an EdgeX application in StarlingX and allow users to
> apply it to the system, to allow the EdgeX services to run and be
> managed by StarlingX. This would add some ease of use benefits for
> EdgeX users while also putting us in the position of maintaining an
> up to date version of EdgeX.
>
> Is this something we should work on as a community?
> > brucej > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Thu Jun 27 21:09:51 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 27 Jun 2019 21:09:51 +0000 Subject: [Starlingx-discuss] [stx-nova] Stein branch rebase In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC14F719B@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C153DA55@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FC14F719B@ALA-MBD.corp.ad.wrs.com> Message-ID: Thanks Don. I'll bring this up next week at the weekly distro-openstack call and we'll decide who makes this final change. Frank -----Original Message----- From: Penney, Don Sent: Thursday, June 27, 2019 4:32 PM To: Miller, Frank ; Dean Troyer ; starlingx Subject: RE: [Starlingx-discuss] [stx-nova] Stein branch rebase Hi Frank, This update should be done by whoever is responsible for the change. It requires updating the image directives file to reference the new branch as the PROJECT_REF value: https://opendev.org/starlingx/upstream/src/branch/master/openstack/python-nova/centos/stx-nova.stable_docker_image#L5 Instructions on building images can be found here: https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Image_Build_Command If I wanted to test the build of this updated directives file, I can use the latest CENGN-built base and wheels, and run: BUILD_STREAM=stable BRANCH=master CENTOS_BASE=starlingx/stx-centos:${BRANCH}-${BUILD_STREAM}-latest WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-${BUILD_STREAM}-wheels.tar time $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \ --stream ${BUILD_STREAM} \ --base ${CENTOS_BASE} \ --wheels ${WHEELS} \ --only stx-nova Cheers, Don. -----Original Message----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Thursday, June 27, 2019 2:30 PM To: Dean Troyer; starlingx Subject: Re: [Starlingx-discuss] [stx-nova] Stein branch rebase Thanks for the update Dean. We'll need our nova docker image to be rebuilt to pick up this branch. The docker images are built weekly on Monday nights. Don can you update the appropriate file to point to this new branch? Frank -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, June 27, 2019 1:32 PM To: starlingx Subject: Re: [Starlingx-discuss] [stx-nova] Stein branch rebase On Wed, Jun 19, 2019 at 6:26 PM Khalil, Ghada wrote: > FYI. The external placement service code merged earlier today: > https://review.opendev.org/#/c/662614/ > https://review.opendev.org/#/c/662371/ And to close the loop, the Github PR is merged, stx-nova branch stx/stein.2 is now ready to be pulled for further testing. 
dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Thu Jun 27 21:11:04 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 27 Jun 2019 21:11:04 +0000 Subject: [Starlingx-discuss] StarlingX 3.0 features In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com> Message-ID: Bruce: For the Containerized OpenStack Clients feature, this one was mostly done for stx2.0 and does not need much more effort to complete in stx3.0. As this was tracked in the containerization subproject it makes the most sense for me to continue to track this one. Dean as you were the original author of the openstack client commands, it would be great to have your input for this one once Stefan has completed his testing and the current gerrit review updated. I expect this to occur shortly after the stx3.0 branch is available. Frank From: Eslimi, Dariush [mailto:Dariush.Eslimi at windriver.com] Sent: Thursday, June 27, 2019 4:36 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 3.0 features Bruce, RedFish changes are almost all in maintenance area, I do not think we would need Lead per feature in this case, Eric and Zhipeng are already collaborating very close on this. I will be leading this feature to completion. Thanks. Dariush From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: June-27-19 11:44 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX 3.0 features Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the "Not yet discussed" features before closing on the 3.0 feature list. Thank you! brucej Work item TSC status Lead Status IA platform features Not yet discussed Abraham, Saul, Ada Most work is validation, new features are integrated as we adopt newer kernels over time. Real time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable. Containerize OVS DPDK AR Yong Forrest Not yet approved, Yong to get with Forrest and confirm intent Performance testing Not yet discussed Victor, Ada Proposal in progress FPGA accelerator support Push to 4.0 Abraham, Ada Too big for 3.0 but will likely need to start soon. FPGA hardware has been ordered OpenStack Train integration Approved Bruce (Dean) Continuous integration from OpenStack master Containerized Ceph Push to 4.0 Vivian Too big for 3.0 but will likely need to start soon Time Sensitive Networking Approved Forrest Spec in progress Kubernetes plugins for IA Partial Cindy Some reviews in progress, QAT approved, FPGA likely 4.0 Redfish Approved Cindy Spec in progress IOT device management Not yet discussed Abraham Demo'd @ Denver. POC / pathfinding work for a customer in progress, item likely too big for 3.0 SUSE build support & enablement Approved Abraham, Saul In progress, previously approved for 2.0 and continuing on Containerized OpenStack Clients Approved Dean? Nearly completed for 2.0, pushed to 3.0 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtroyer at gmail.com Thu Jun 27 21:24:44 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 27 Jun 2019 16:24:44 -0500 Subject: [Starlingx-discuss] StarlingX 3.0 features In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com> Message-ID: On Thu, Jun 27, 2019 at 4:12 PM Miller, Frank wrote: > For the Containerized OpenStack Clients feature, this one was mostly done > for stx2.0 and does not need much more effort to complete in stx3.0. As > this was tracked in the containerization subproject it makes the most sense > for me to continue to track this one. Dean as you were the original author > of the openstack client commands, it would be great to have your input for > this one once Stefan has completed his testing and the current gerrit > review updated. I expect this to occur shortly after the stx3.0 branch is > available. > Sure, I'd be glad to. Note that I released the last OSC 3.x (3.19.0) and companion osc-lib a couple of weeks ago, it should be a clean upgrade and includes some much-improved live migration handling that we finally resolved in Denver last month. Next up: OSC4 finally! dt -- Dean Troyer dtroyer at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Thu Jun 27 22:58:37 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 27 Jun 2019 22:58:37 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 6/27/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID="20190621T013000Z" from (link) ---------------------------------------------------------------------- Overall Results: Total = 421 Pass = 125 Fail = 5 Blocked = 2 Total executed = 132 Pass Rate = 94.6% ---------------------------------------------------------------------- Results per Domain: Regression - AIO-SX 23 PASS |1 FAIL|2 BLOCKED Regression - Backup & Restore Regression - Distributed Cloud Regression - Gnoochi 12 PASS Regression - FM Regression - HA Regression - Heat 10 PASS Regression - Horizon 2 PASS Regression - Install and Config Regression - Maintenance Regression - Networking 39 PASS Regression - Nova Regression - Security 20 PASS | 2 FAIL Regression - Storage Regression - Inventory 19 PASS | 2 FAIL System Test --------------------------------------------------------------------------- Bugs: Controller can't unlock after lock on AIO-SX : https://bugs.launchpad.net/starlingx/+bug/1833472 user does not login within configured time(60s) login is aborted : https://bugs.launchpad.net/starlingx/+bug/1833469 removing attributes from bash.log should not be possible : https://bugs.launchpad.net/starlingx/+bug/1833619 sysadmin user not locked out after 5 wrong password attempts : https://bugs.launchpad.net/starlingx/+bug/1834116 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Thu Jun 27 23:12:03 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 27 Jun 2019 23:12:03 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190627 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jun-27 (link) Status: Green ====================== Bare Metal environment ====================== AIO - Simplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 49 TCs Sanity-Platform 11 TCs ------------------------------ TOTAL: 64 TCs AIO - Duplex: Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - Local Storage (2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 09 TCs ------------------------------ TOTAL: 65 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provision-Containers 01 TCs Sanity-OpenStack 52 TCs Sanity-Platform 05 TCs ------------------------------ TOTAL: 61 TCs =================== Virtual Environment =================== AIO - Simplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 49 TCs Sanity Platform 07 TCs ------------------------------ TOTAL: 60 TCs AIO - Duplex Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 51 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - Local Storage (2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Standard - External Storage (2+2+2): Setup 03 TCs Provisioning 01 TCs Sanity OpenStack 52 TCs Sanity Platform 05 TCs ------------------------------ TOTAL: 61 TCs Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Jun 28 00:08:53 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 28 Jun 2019 00:08:53 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26 In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35FB2DC2@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB3314@SHSMSX104.ccr.corp.intel.com> <371DF9A763E9F44F924F4A821FC070264D0D8115@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB3DD9@SHSMSX104.ccr.corp.intel.com> <371DF9A763E9F44F924F4A821FC070264D0D8208@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB4111@SHSMSX104.ccr.corp.intel.com> <371DF9A763E9F44F924F4A821FC070264D0D83B3@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB422A@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FB4438@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FB5374@SHSMSX104.ccr.corp.intel.com> Thanks Dariush for the update. It’s good to know that work is moving forward smoothly and you will manage it as community leader. Looking forward to see the demo and we can offer feedbacks. Thanks. - cindy From: Eslimi, Dariush [mailto:Dariush.Eslimi at windriver.com] Sent: Friday, June 28, 2019 4:32 AM To: Xie, Cindy ; Rowsell, Brent ; Peters, Matt ; Wold, Saul Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26 Resend with history cleanup as message reached the 40K limit. 
From: Eslimi, Dariush
Sent: June-27-19 4:29 PM
To: 'Xie, Cindy'; Rowsell, Brent; Peters, Matt; Wold, Saul
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

Cindy,
This is mostly in stx-config and to some extent containers (as we use the app framework for packaging and installation). We have started the implementation, and a few code reviews are already posted by John and Kevin as WIP; they are under code review and waiting for spec approval. I will be leading this effort and plan to give a demo on how far we have come and to collect the community's feedback.
Thanks,
Dariush

From: Xie, Cindy [mailto:cindy.xie at intel.com]
To: Rowsell, Brent; Peters, Matt; Wold, Saul
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 6/26

Brent,
I noticed that we have a storyboard [1] approved for stx.3.0 from Matt. By reading the spec [2], I am thinking this might fall into the non-openstack-dist project scope. Can I volunteer to manage the progress in the community?
Thx. - cindy

[1] https://storyboard.openstack.org/#!/story/2005733
[2] https://review.opendev.org/#/c/665208/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cindy.xie at intel.com Fri Jun 28 00:26:33 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Fri, 28 Jun 2019 00:26:33 +0000
Subject: Re: [Starlingx-discuss] StarlingX 3.0 features
In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FB53F8@SHSMSX104.ccr.corp.intel.com>

Bruce,
We had a discussion in the non-OpenStack-dist meeting about features that could be part of the sub-project. The items below are not on your list, and I'd like to bring them up here as well:
1. Redfish support (https://storyboard.openstack.org/#!/story/2005861)
2. Python2to3 transition (https://wiki.openstack.org/wiki/StarlingX/Python2, story to be created)
3. non-Openstack patch cleanup (TBD)
4. Ceph containerization (https://storyboard.openstack.org/#!/story/2005527, spec under review: https://review.opendev.org/#/c/656371/)
5. Support for Kata containers (story to be created)
6. QAT support in Cinder & Glance (story to be created)
7. Systemd standardization. We need to standardize on systemd and move away from the hybrid mode (a rough illustration follows below).
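Purely as a sketch of what moving a service to native systemd could look like (the unit name, paths and binary below are invented, not actual StarlingX files):

    # Sketch: install and start a native systemd unit for a hypothetical service.
    cat > /etc/systemd/system/example-stx.service <<'EOF'
    [Unit]
    Description=Example StarlingX platform service (illustrative)
    After=network-online.target

    [Service]
    Type=simple
    ExecStart=/usr/bin/example-stx-daemon
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now example-stx.service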
Only 1 and 4 were included in your list. I was assuming that 2 and 3 were approved and that we will continue the effort. As for 5, Kata container support, Shuicheng is taking the lead to evaluate what needs to be done, and I will propose it as a stretch goal for 3.0. As for 6, QAT support in Cinder & Glance, I've confirmed with Vivian that this is OpenStack upstream work targeting the Train release; thus, if we pick up Train, StarlingX will have the feature. Saul is pushing 7, and I think it will be great if we can standardize it.
I can create the SBs for the items that don't have one yet, and put them onto https://etherpad.openstack.org/p/stx-r3-feature-candidates
Thx. - cindy

From: Dean Troyer [mailto:dtroyer at gmail.com]
Sent: Friday, June 28, 2019 5:25 AM
To: Miller, Frank
Cc: Eslimi, Dariush; Jones, Bruce E; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] StarlingX 3.0 features

On Thu, Jun 27, 2019 at 4:12 PM Miller, Frank wrote:
For the Containerized OpenStack Clients feature, this one was mostly done for stx2.0 and does not need much more effort to complete in stx3.0. As this was tracked in the containerization subproject, it makes the most sense for me to continue to track this one. Dean, as you were the original author of the openstack client commands, it would be great to have your input for this one once Stefan has completed his testing and the current gerrit review is updated. I expect this to occur shortly after the stx3.0 branch is available.

Sure, I'd be glad to. Note that I released the last OSC 3.x (3.19.0) and its companion osc-lib a couple of weeks ago; it should be a clean upgrade and includes some much-improved live migration handling that we finally resolved in Denver last month. Next up: OSC4, finally!

dt

--
Dean Troyer
dtroyer at gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ghada.Khalil at windriver.com Fri Jun 28 00:42:29 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Fri, 28 Jun 2019 00:42:29 +0000
Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - June 27/2019
Message-ID: <151EE31B9FCCA54397A757BC674650F0C1540492@ALA-MBD.corp.ad.wrs.com>

Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases

Release meeting agenda / notes - June 27 2019

stx.2.0
- Feature Exceptions Status
  - Most are going in this week and next -- July 12
  - Agreed to delay package upversions from Erich until RC1, given the upversion is not required to address any bugs.
- Feature Testing Status
  - Tracker: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237
  - Containers
    - Ada - 75/41/1/2
    - Numan - 82/58/3/0
    - Question to Ada/Numan: Does the container testing cover ironic and the final nova overrides being added?
  - OpenStack patch elimination
    - Ada - 50/21/0/6
    - Numan - 8/4/4/0
  - CentOS 7.6
    - QAT - 12/9/0/0
  - Containerized OVS - 15/12/0/0
  - Ceph upgrade - 26/16/0/6
    - Due to hardware requirements for these tests, tests have been delayed (all our hardware is being used for regression testing). Working on scheduling the execution of the remaining ones. Also, we still have 6 blocked waiting for feedback on the instructions. I'm reforecasting the due date to 07/12.
  - As per Elio, the team plans to add more testing for IPv6. Plans are being worked. Some are covered under regression already, but the scope can be expanded to cover the k8s cluster network.
- Regression testing:
  - Tracker: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?usp=sharing
  - Reports to be sent to the mailing list on Tuesdays and Thursdays.
  - Running with ISO 20190621T013000Z
  - Question to Ada/Numan: How often will regression be picking up new green sanity loads?
  - As per Elio, the plan is to update the regression labs with a new green load every week (Fridays)
  - Total / Pass / Fail / Blocked = 421 / 123 / 7 / 2
    - Percentage complete is higher as Numan's team still needs to update the tracker
  - Testcase First Pass Execution for regression: July 5
  - Plan to run another full execution cycle in 3 wks: July 26
  - Then focus on bug retest until stx.2.0 is out
- Launchpads:
  - Will use the stx.regression tag to identify bugs found during regression: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.regression (currently showing 11 open)
  - Ghada to send a note to the mailing list to remind the test teams -- stx.sanity & stx.regression
  - List from Ada:
    - Controller can't unlock after lock on AIO-SX - https://bugs.launchpad.net/starlingx/+bug/1833472
    - user does not login within configured time(60s) login is aborted - https://bugs.launchpad.net/starlingx/+bug/1833469
    - removing attributes from bash.log should not be possible - https://bugs.launchpad.net/starlingx/+bug/1833619
    - sysadmin user not locked out after 5 wrong password attempts - https://bugs.launchpad.net/starlingx/+bug/1834116

stx.3.0
- Release dates were agreed on the community call on 6/26
  - https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=0
- MS-1 Next Steps - Start compiling the list

Feature backlog tracking - AR from TSC meeting 6/27
Options:
- Jira - Pros: well known, full featured. Cons: Not open source, not integrated with opendev infra
- Trello - Pros: free, full featured, light and easy to use. Cons: Not open source, not integrated with opendev infra
- Storyboard - Pros: part of opendev infra, already in use. Cons: Missing key features
  - Continue to use storyboard for more "mature" items that are getting into the implementation phase
- Launchpad bugs - Pros: part of opendev infra, already in use. Cons: Already used for bugs, not really suited for feature tracking
- Launchpad blueprints - Pros: part of opendev infra, has key features (priority, approval, owner, tracking). Cons: Not yet configured for StarlingX
  - Recommended by Bruce
  - Use for backlog tracking until approval for inclusion in a release. Then use storyboard for the implementation phase.
  - Bruce to publish to the mailing list. Then review in the community call
- Google sheet to track backlog
  - To date, we've used google sheets to track the dates for the items/features planned for a release

From Ghada.Khalil at windriver.com Fri Jun 28 00:49:31 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Fri, 28 Jun 2019 00:49:31 +0000
Subject: [Starlingx-discuss] [Test] Tags for launchpad bugs
Message-ID: <151EE31B9FCCA54397A757BC674650F0C15404C3@ALA-MBD.corp.ad.wrs.com>

To help with categorizing bugs, the release team would like to remind the test teams to continue using the following tags in launchpad:
- stx.sanity -- for issues reported from sanity
- stx.regression -- for issues reported from regression testing

Launchpad queries:
- Sanity: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.sanity
- Regression: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.regression

Regards,
Ghada

From Ghada.Khalil at windriver.com Fri Jun 28 00:59:57 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Fri, 28 Jun 2019 00:59:57 +0000
Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 06/27
Message-ID: <151EE31B9FCCA54397A757BC674650F0C1540512@ALA-MBD.corp.ad.wrs.com>

Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking

Team Meeting Agenda/Notes - Jun 27/2019

Bugs:
- https://bugs.launchpad.net/starlingx/+bug/1832697 - needs comments from Matt
- https://bugs.launchpad.net/starlingx/+bug/1832047 - not seeing this bug on a duplex virtual env; trying to reproduce it on a BM env
- https://bugs.launchpad.net/starlingx/+bug/1833463 - bond is not supported in openstack-helm yet, but the community is happy to see this feature implemented.
- https://bugs.launchpad.net/starlingx/+bug/1829403 - pending on Peng's reply.
- https://bugs.launchpad.net/starlingx/+bug/1831130 - pending on Elio's reply.

Features:
- Multus / SRIOV CNI Plugins - Tried the new version and sent out the report by email.

Networking Test Status
- Containerized OVS - Testing is almost done. Elio will send a report to the mailing list.
- IPv6 - As per Elio, the team plans to add more testing for IPv6. Plans are being worked. Some are covered under regression already, but the scope can be expanded to cover the k8s cluster network.
  - One bug to keep in mind: https://bugs.launchpad.net/starlingx/+bug/1834234

From Brent.Rowsell at windriver.com Fri Jun 28 01:46:28 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Fri, 28 Jun 2019 01:46:28 +0000
Subject: Re: [Starlingx-discuss] StarlingX 3.0 features
In-Reply-To: <9A85D2917C58154C960D95352B22818BD07787EE@fmsmsx123.amr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EC2565E0F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD07787EE@fmsmsx123.amr.corp.intel.com>
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC25671EE@ALA-MBD.corp.ad.wrs.com>

Bruce,
Since there won't be another TSC meeting until Jul 11th, can you provide some more detail on the 3 "not yet discussed" items?
Thanks,
Brent

From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
Sent: Thursday, June 27, 2019 12:16 PM
To: Rowsell, Brent
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] StarlingX 3.0 features

Thank you, Brent, for your review and questions.
1) FPGA accelerator support is for OpenStack (e.g.
Cyborg integration) 2) Agree that the Containerized Ceph spec can/should be split 3) "Lead" is the person responsible internally for getting the work done and is the contact for any questions about the feature. It may or may not be the person who writes the spec. brucej From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, June 27, 2019 8:58 AM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 3.0 features Bruce, Thanks. A couple of comments/questions. 1) What's the difference between FPGA accelerator support and k8s fpga device plugin. As discussed at the TSC two wks ago, I have a dev that will be doing a spec for the later. 2) Containerized CEPH. It would be good to break this into two specs I think, one for prep content (R3) and one to complete the integration 3) What do we mean by Lead ? Spec owner ? Brent From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, June 27, 2019 11:44 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX 3.0 features Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the "Not yet discussed" features before closing on the 3.0 feature list. Thank you! brucej Work item TSC status Lead Status IA platform features Not yet discussed Abraham, Saul, Ada Most work is validation, new features are integrated as we adopt newer kernels over time. Real time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable. Containerize OVS DPDK AR Yong Forrest Not yet approved, Yong to get with Forrest and confirm intent Performance testing Not yet discussed Victor, Ada Proposal in progress FPGA accelerator support Push to 4.0 Abraham, Ada Too big for 3.0 but will likely need to start soon. FPGA hardware has been ordered OpenStack Train integration Approved Bruce (Dean) Continuous integration from OpenStack master Containerized Ceph Push to 4.0 Vivian Too big for 3.0 but will likely need to start soon Time Sensitive Networking Approved Forrest Spec in progress Kubernetes plugins for IA Partial Cindy Some reviews in progress, QAT approved, FPGA likely 4.0 Redfish Approved Cindy Spec in progress IOT device management Not yet discussed Abraham Demo'd @ Denver. POC / pathfinding work for a customer in progress, item likely too big for 3.0 SUSE build support & enablement Approved Abraham, Saul In progress, previously approved for 2.0 and continuing on Containerized OpenStack Clients Approved Dean? Nearly completed for 2.0, pushed to 3.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.kunpeng at 99cloud.net Fri Jun 28 03:45:57 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Fri, 28 Jun 2019 11:45:57 +0800 Subject: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time Message-ID: <45B09BAC-9214-41B0-AF8A-75206C778C85@99cloud.net> Hi guys, When I use sysinv to bring up the containerized services, platform-integ-apps is stay applying for a long time. I wonder it may be caused by recent poweroff. Now I have reactived the controllers, and can you tell me how to re-apply it? 
[root at controller-0 log(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[root at controller-0 log(keystone_admin)]# system application-list
+---------------------+-----------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+
| application         | version                     | manifest name                 | manifest file      | status    | progress                                                         |
+---------------------+-----------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+
| platform-integ-apps | 1.0-7                       | platform-integration-manifest | manifest.yaml      | applying  | processing chart: stx-rbd-provisioner, overall completion: 50.0% |
| stx-openstack       | 1.0-16-centos-stable-latest | armada-manifest               | stx-openstack.yaml | uploading | validating and uploading charts                                  |
+---------------------+-----------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+

Thanks
Kunpeng

From ezpeerchen at gmail.com Fri Jun 28 07:29:56 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Fri, 28 Jun 2019 15:29:56 +0800
Subject: Re: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time
In-Reply-To: <45B09BAC-9214-41B0-AF8A-75206C778C85@99cloud.net>
References: <45B09BAC-9214-41B0-AF8A-75206C778C85@99cloud.net>
Message-ID:

Dear Kunpeng,
Your progress status is applying. If it shows apply-failed status, then do re-apply.
Command:
# system application-apply platform-integ-apps
Thanks

张鲲鹏 (Zhang Kunpeng) wrote on Fri, Jun 28, 2019 at 11:48 AM:
> Hi guys,
>
> When I use sysinv to bring up the containerized services,
> platform-integ-apps is stay applying for a long time. I wonder it may be
> caused by recent poweroff. Now I have reactived the controllers, and can
> you tell me how to re-apply it?
> > [root at controller-0 log(keystone_admin)]# system host-list > > +----+--------------+-------------+----------------+-------------+--------------+ > | id | hostname | personality | administrative | operational | > availability | > > +----+--------------+-------------+----------------+-------------+--------------+ > | 1 | controller-0 | controller | unlocked | enabled | > available | > | 2 | controller-1 | controller | unlocked | enabled | > available | > > +----+--------------+-------------+----------------+-------------+--------------+ > [root at controller-0 log(keystone_admin)]# system application-list > > +---------------------+---------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+ > | application | version | manifest name > | manifest file | status | progress > | > > +---------------------+---------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+ > | platform-integ-apps | 1.0-7 | > platform-integration-manifest | manifest.yaml | applying | processing > chart: stx-rbd-provisioner, overall completion: 50.0% | > | stx-openstack | 1.0-16-centos-stable- | armada-manifest > | stx-openstack.yaml | uploading | validating and uploading charts > | > | | latest | > | | | > | > | | | > | | | > | > > +---------------------+---------------------------+-------------------------------+--------------------+-----------+—————————————————————————————————+ > > Thanks > Kunpeng > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Fri Jun 28 07:42:47 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 28 Jun 2019 07:42:47 +0000 Subject: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time In-Reply-To: References: <45B09BAC-9214-41B0-AF8A-75206C778C85@99cloud.net> Message-ID: <9700A18779F35F49AF027300A49E7C76608AE545@SHSMSX105.ccr.corp.intel.com> Hi Kunpeng, You could check /var/log/armada/platform-integ-apps-apply.log, to get details where the process stuck at. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Friday, June 28, 2019 3:30 PM To: 张鲲鹏 Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time Dear Kunpeng, Your progress status is applying. If it shows apply-failed status, then do re-apply. Command: # system application-apply platform-integ-apps Thanks 张鲲鹏 > 於 2019年6月28日 週五 上午11:48寫道: Hi guys, When I use sysinv to bring up the containerized services, platform-integ-apps is stay applying for a long time. I wonder it may be caused by recent poweroff. Now I have reactived the controllers, and can you tell me how to re-apply it? 
[root at controller-0 log(keystone_admin)]# system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
[root at controller-0 log(keystone_admin)]# system application-list
+---------------------+-----------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+
| application         | version                     | manifest name                 | manifest file      | status    | progress                                                         |
+---------------------+-----------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+
| platform-integ-apps | 1.0-7                       | platform-integration-manifest | manifest.yaml      | applying  | processing chart: stx-rbd-provisioner, overall completion: 50.0% |
| stx-openstack       | 1.0-16-centos-stable-latest | armada-manifest               | stx-openstack.yaml | uploading | validating and uploading charts                                  |
+---------------------+-----------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+

Thanks
Kunpeng
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From chenjie.xu at intel.com Fri Jun 28 07:53:26 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Fri, 28 Jun 2019 07:53:26 +0000
Subject: Re: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time
In-Reply-To: <9700A18779F35F49AF027300A49E7C76608AE545@SHSMSX105.ccr.corp.intel.com>
References: <45B09BAC-9214-41B0-AF8A-75206C778C85@99cloud.net> <9700A18779F35F49AF027300A49E7C76608AE545@SHSMSX105.ccr.corp.intel.com>
Message-ID:

Hi Kunpeng,
I met the same issue before. As a result of the poweroff, the status will always be "applying" because the job has been killed and the status in the database won't be changed from applying to apply-failed. What's more, you can't re-apply because the status is applying. I changed the status in the database and then re-applied. Hope the following commands can be useful:

cat /etc/sysinv/sysinv.conf | grep sql
connection=postgresql+psycopg2://admin-sysinv:d0df960de712Ti0*@192.168.204.2/sysinv
psql -h 192.168.204.2 -U admin-sysinv -W sysinv
d0df960de712Ti0*
select * from kube_app;
update kube_app set status='apply-failed' where name='platform-integ-apps';
select * from kube_app;
\q
system application-apply platform-integ-apps

Best Regards,
Xu, Chenjie

From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Friday, June 28, 2019 3:43 PM
To: Ezpeer Chen; 张鲲鹏
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time

Hi Kunpeng,
You could check /var/log/armada/platform-integ-apps-apply.log to get details on where the process is stuck.
Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Friday, June 28, 2019 3:30 PM To: 张鲲鹏 > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time Dear Kunpeng, Your progress status is applying. If it shows apply-failed status, then do re-apply. Command: # system application-apply platform-integ-apps Thanks 张鲲鹏 > 於 2019年6月28日 週五 上午11:48寫道: Hi guys, When I use sysinv to bring up the containerized services, platform-integ-apps is stay applying for a long time. I wonder it may be caused by recent poweroff. Now I have reactived the controllers, and can you tell me how to re-apply it? [root at controller-0 log(keystone_admin)]# system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ [root at controller-0 log(keystone_admin)]# system application-list +---------------------+---------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+ | application | version | manifest name | manifest file | status | progress | +---------------------+---------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applying | processing chart: stx-rbd-provisioner, overall completion: 50.0% | | stx-openstack | 1.0-16-centos-stable- | armada-manifest | stx-openstack.yaml | uploading | validating and uploading charts | | | latest | | | | | | | | | | | | +---------------------+---------------------------+-------------------------------+--------------------+-----------+—————————————————————————————————+ Thanks Kunpeng _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tee.Ngo at windriver.com Fri Jun 28 11:57:08 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Fri, 28 Jun 2019 11:57:08 +0000 Subject: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time In-Reply-To: References: <45B09BAC-9214-41B0-AF8A-75206C778C85@99cloud.net> <9700A18779F35F49AF027300A49E7C76608AE545@SHSMSX105.ccr.corp.intel.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A6350A@ALA-MBD.corp.ad.wrs.com> There is an existing LP with the same cause: https://bugs.launchpad.net/starlingx/+bug/1833323 Until it is resolved, issue the following commands if the application status are stuck in applying/removing because sysinv-conductor was abruptly terminated (power outage, process restarted by sm due to missing audits, process killed/system boot due to OOM, etc…) sudo -u postgres psql -d sysinv -c "update kube_app set status='apply-failed' where status='applying';" sudo -u postgres psql -d sysinv -c "update kube_app set status='remove-failed' where status='removing';" You can then retrigger the application apply/remove. Tee From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: June-28-19 3:53 AM To: Lin, Shuicheng; Ezpeer Chen; 张鲲鹏 Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time Hi Kunpeng, I met the same issue before. As a result of poweroff, the status will always be applying because the job has been killed and the status in the database won’t be changed from applying to apply-failed. What’s more, you can’t re-apply because the status is applying. I changed the status in the database and then re-apply. Hope the following commands can be useful: cat /etc/sysinv/sysinv.conf | grep sql connection=postgresql+psycopg2://admin-sysinv:d0df960de712Ti0*@192.168.204.2/sysinv psql -h 192.168.204.2 -U admin-sysinv -W sysinv d0df960de712Ti0* select * from kube_app; update kube_app set status=’apply-failed’ where name=’platform-integ-apps’; select * from kube_app; \q system application-apply platform-integ-apps Best Regards, Xu, Chenjie From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, June 28, 2019 3:43 PM To: Ezpeer Chen ; 张鲲鹏 Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time Hi Kunpeng, You could check /var/log/armada/platform-integ-apps-apply.log, to get details where the process stuck at. Best Regards Shuicheng From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Friday, June 28, 2019 3:30 PM To: 张鲲鹏 > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [stx-config] platform-integ-apps was applying with a long time Dear Kunpeng, Your progress status is applying. If it shows apply-failed status, then do re-apply. Command: # system application-apply platform-integ-apps Thanks 张鲲鹏 > 於 2019年6月28日 週五 上午11:48寫道: Hi guys, When I use sysinv to bring up the containerized services, platform-integ-apps is stay applying for a long time. I wonder it may be caused by recent poweroff. Now I have reactived the controllers, and can you tell me how to re-apply it? 
[root at controller-0 log(keystone_admin)]# system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ [root at controller-0 log(keystone_admin)]# system application-list +---------------------+---------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+ | application | version | manifest name | manifest file | status | progress | +---------------------+---------------------------+-------------------------------+--------------------+-----------+------------------------------------------------------------------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applying | processing chart: stx-rbd-provisioner, overall completion: 50.0% | | stx-openstack | 1.0-16-centos-stable- | armada-manifest | stx-openstack.yaml | uploading | validating and uploading charts | | | latest | | | | | | | | | | | | +---------------------+---------------------------+-------------------------------+--------------------+-----------+—————————————————————————————————+ Thanks Kunpeng _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Fri Jun 28 13:02:27 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Fri, 28 Jun 2019 13:02:27 +0000 Subject: [Starlingx-discuss] EdgeX deeper integration? In-Reply-To: <8b2eee427bc937014881a13cb45414bfb8c19443.camel@intel.com> References: <9A85D2917C58154C960D95352B22818BD0778EFA@fmsmsx123.amr.corp.intel.com> <8b2eee427bc937014881a13cb45414bfb8c19443.camel@intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A82808@ALA-MBD.corp.ad.wrs.com> There's definitely interest from the Akraino/StarlingX perspective. For those that aren't aware, the EdgeX Foundry application is part of Akraino's StarlingX blueprint [0]. As plans unfold for the next release of Akraino, we will be happy to have contributions in this area. [0] https://wiki.akraino.org/display/AK/StarlingX+Far+Edge+Distributed+Cloud -----Original Message----- From: Cordoba Malibran, Erich Sent: Thursday, June 27, 2019 5:01 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] EdgeX deeper integration? Actually, in the ongoing review for the pytest framework there is this setup of EdgeX in k8s. https://review.opendev.org/#/c/665419/3/automated-pytest-suite/testcases/functional/z_containers/test_kube_edgex_services.py This could be a good start point to create an EdgeX application. -Erich On Thu, 2019-06-27 at 19:54 +0000, Jones, Bruce E wrote: > We had an internal discussion today about EdgeX. We are seeing signs > of it increasing in use and importance in the Edge ecosystem. > > It is fairly straightforward to build and run an EdgeX application > under StarlingX today. We had it running in the Intel booth at the > Denver Summit. 
From Bill.Zvonar at windriver.com  Fri Jun 28 13:02:27 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Fri, 28 Jun 2019 13:02:27 +0000
Subject: [Starlingx-discuss] EdgeX deeper integration?
In-Reply-To: <8b2eee427bc937014881a13cb45414bfb8c19443.camel@intel.com>
References: <9A85D2917C58154C960D95352B22818BD0778EFA@fmsmsx123.amr.corp.intel.com>
	<8b2eee427bc937014881a13cb45414bfb8c19443.camel@intel.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A82808@ALA-MBD.corp.ad.wrs.com>

There's definitely interest from the Akraino/StarlingX perspective.

For those that aren't aware, the EdgeX Foundry application is part of Akraino's StarlingX blueprint [0]. As plans unfold for the next release of Akraino, we will be happy to have contributions in this area.

[0] https://wiki.akraino.org/display/AK/StarlingX+Far+Edge+Distributed+Cloud

-----Original Message-----
From: Cordoba Malibran, Erich
Sent: Thursday, June 27, 2019 5:01 PM
To: Jones, Bruce E; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] EdgeX deeper integration?

Actually, in the ongoing review for the pytest framework there is this setup of EdgeX in k8s.

https://review.opendev.org/#/c/665419/3/automated-pytest-suite/testcases/functional/z_containers/test_kube_edgex_services.py

This could be a good starting point to create an EdgeX application.

-Erich

On Thu, 2019-06-27 at 19:54 +0000, Jones, Bruce E wrote:
> We had an internal discussion today about EdgeX. We are seeing signs
> of it increasing in use and importance in the Edge ecosystem.
>
> It is fairly straightforward to build and run an EdgeX application
> under StarlingX today. We had it running in the Intel booth at the
> Denver Summit.
>
> My question for the community is this: Is there value or interest in
> making EdgeX apps even easier to run within StarlingX? For example,
> we could create an EdgeX application in StarlingX and allow users to
> apply it to the system, to allow the EdgeX services to run and be
> managed by StarlingX. This would add some ease of use benefits for
> EdgeX users while also putting us in the position of maintaining an up
> to date version of EdgeX.
>
> Is this something we should work on as a community?
>
> brucej
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
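[Editorial aside: for concreteness, here is roughly what "EdgeX running under StarlingX" looks like from the Kubernetes side, in the spirit of the pytest check Erich links above. This is only a sketch: the "edgex" namespace, the edgex-core-data service name, port 48080 and the /api/v1/ping endpoint are assumptions based on a stock EdgeX v1 deployment, not details confirmed in this thread.]

# List the EdgeX service pods (assumes EdgeX was deployed into an "edgex" namespace):
kubectl get pods -n edgex

# Probe one of the core services from inside the cluster to confirm it is serving:
kubectl run edgex-probe -n edgex --rm -it --restart=Never --image=busybox -- \
    wget -qO- http://edgex-core-data:48080/api/v1/ping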
From bruce.e.jones at intel.com  Fri Jun 28 15:30:59 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Fri, 28 Jun 2019 15:30:59 +0000
Subject: [Starlingx-discuss] StarlingX 3.0 features
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EC25671EE@ALA-MBD.corp.ad.wrs.com>
References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com>
	<2588653EBDFFA34B982FAF00F1B4844EC2565E0F@ALA-MBD.corp.ad.wrs.com>
	<9A85D2917C58154C960D95352B22818BD07787EE@fmsmsx123.amr.corp.intel.com>
	<2588653EBDFFA34B982FAF00F1B4844EC25671EE@ALA-MBD.corp.ad.wrs.com>
Message-ID: <9A85D2917C58154C960D95352B22818BD07794E7@fmsmsx123.amr.corp.intel.com>

"IA Platform features" are things like RDT (Resource Director Technology), SGX (Software Guard Extensions) and EPID (Enhanced Privacy ID). These features tend to get added automatically when we upgrade to newer components but should be tested within StarlingX.

"Performance Testing" is creating an open framework to measure key performance indicators for StarlingX - things like network latency, fault detection times and so forth.

"IOT Device Management" is building on the demo you showed at Denver and taking that to the next level, enabling IOT gateways and similar devices.

brucej

From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com]
Sent: Thursday, June 27, 2019 6:46 PM
To: Jones, Bruce E
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] StarlingX 3.0 features

Bruce,

Since there won't be another TSC meeting until Jul 11th, can you provide some more detail on the three "Not yet discussed" items?

Thanks,
Brent

From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
Sent: Thursday, June 27, 2019 12:16 PM
To: Rowsell, Brent
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] StarlingX 3.0 features

Thank you, Brent, for your review and questions.

1) FPGA accelerator support is for OpenStack (e.g. Cyborg integration)
2) Agree that the Containerized Ceph spec can/should be split
3) "Lead" is the person responsible internally for getting the work done and is the contact for any questions about the feature. It may or may not be the person who writes the spec.

brucej

From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com]
Sent: Thursday, June 27, 2019 8:58 AM
To: Jones, Bruce E; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] StarlingX 3.0 features

Bruce,

Thanks. A couple of comments/questions.

1) What's the difference between FPGA accelerator support and the k8s FPGA device plugin? As discussed at the TSC two weeks ago, I have a dev that will be doing a spec for the latter.
2) Containerized Ceph: it would be good to break this into two specs I think, one for prep content (R3) and one to complete the integration.
3) What do we mean by Lead? Spec owner?

Brent

From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
Sent: Thursday, June 27, 2019 11:44 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] StarlingX 3.0 features

Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the "Not yet discussed" features before closing on the 3.0 feature list. Thank you!

brucej

Work item                       | TSC status        | Lead               | Status
--------------------------------+-------------------+--------------------+-----------------------------------------------------------
IA platform features            | Not yet discussed | Abraham, Saul, Ada | Most work is validation; new features are integrated as we adopt newer kernels over time. Real-time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable.
Containerize OVS DPDK           | AR Yong           | Forrest            | Not yet approved; Yong to get with Forrest and confirm intent
Performance testing             | Not yet discussed | Victor, Ada        | Proposal in progress
FPGA accelerator support        | Push to 4.0       | Abraham, Ada       | Too big for 3.0 but will likely need to start soon. FPGA hardware has been ordered.
OpenStack Train integration     | Approved          | Bruce (Dean)       | Continuous integration from OpenStack master
Containerized Ceph              | Push to 4.0       | Vivian             | Too big for 3.0 but will likely need to start soon
Time Sensitive Networking       | Approved          | Forrest            | Spec in progress
Kubernetes plugins for IA       | Partial           | Cindy              | Some reviews in progress; QAT approved, FPGA likely 4.0
Redfish                         | Approved          | Cindy              | Spec in progress
IOT device management           | Not yet discussed | Abraham            | Demo'd @ Denver. POC / pathfinding work for a customer in progress; item likely too big for 3.0
SUSE build support & enablement | Approved          | Abraham, Saul      | In progress; previously approved for 2.0 and continuing on
Containerized OpenStack Clients | Approved          | Dean?              | Nearly completed for 2.0, pushed to 3.0

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From juan.carlos.alonso at intel.com  Fri Jun 28 22:27:45 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Fri, 28 Jun 2019 22:27:45 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190628
Message-ID: <8557B550001AFB46A43A0CCC314BF85168772504@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-June-28 (link)

Status: GREEN

===========================================
Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Regards
Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ghada.Khalil at windriver.com  Fri Jun 28 23:15:25 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Fri, 28 Jun 2019 23:15:25 +0000
Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Release meeting
Message-ID: <151EE31B9FCCA54397A757BC674650F0C1540B97@ALA-MBD.corp.ad.wrs.com>

There will be no release meeting on Thursday July 4th due to the stat holiday in the US.

Weekly meeting on Thursday 11AM PT / 1900 UTC
Zoom Link: https://zoom.us/j/342730236
Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: text/calendar
Size: 1751 bytes
Desc: not available
URL:

From Frank.Miller at windriver.com  Sat Jun 29 02:07:35 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Sat, 29 Jun 2019 02:07:35 +0000
Subject: [Starlingx-discuss] Canceled: Weekly Containerization Meeting July 1 & July 8
Message-ID:

Please note that we will not be holding a StarlingX containerization meeting on Monday July 1st due to a national holiday, nor on Monday July 8th due to vacation. Our next meeting will be held Monday July 15th.

Frank
Containers Project Lead
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ricardo.o.perez at intel.com  Sat Jun 29 00:05:09 2019
From: ricardo.o.perez at intel.com (Perez, Ricardo O)
Date: Sat, 29 Jun 2019 00:05:09 +0000
Subject: [Starlingx-discuss] QAT Validation
In-Reply-To:
References: <000501d51cc3$4d1f3930$e75dab90$@neusoft.com>
Message-ID:

Hello StarlingX QAT team,

Today I have finished the QAT test execution. You can see the results here:
https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=84126711

The QATzip library test has been completed successfully. This test took a particularly long time because of the many steps that need to be done (a customized CentOS image with the QAT driver and the QATzip library inside). Here is part of the output of those tests:

[root at host-192-168-101-176 multiple_process_opt]# adf_ctl qat_dev0 status
Checking status of device qat_dev0
qat_dev0 - type: c6xxvf, inst_id: 0, node_id: 0, bsf: 0000:00:05.0, #accel: 1 #engines: 1 state: up

Reading input file test.tmp (70 Bytes)
Compressing...
Time taken:          0.137 ms
Throughput:          4.088 Mbit/s
Space Savings:      20.000 %
Compression ratio:   1.250 : 1
Reading input file test_out.gz (56 Bytes)
Decompressing...
Time taken:          0.022 ms
Throughput:         25.455 Mbit/s
QAT file compression and decompression OK :)
Stopping all devices.
SW file compression and decompression OK :)
Starting all devices.
Processing /etc/c6xxvf_dev0.conf
Reading input file test.tmp.gz (6507054 Bytes)
Decompressing...
Time taken:         71.330 ms
Throughput:       1425.935 Mbit/s
Reading input file test.tmp (1 Bytes)
Compressing...
Time taken:          0.105 ms
Throughput:          0.076 Mbit/s
Space Savings:   -3400.000 %
Compression ratio:   0.029 : 1
Reading input file test.tmp.gz (6507054 Bytes)
Decompressing...
Time taken:         71.330 ms
Throughput:       1425.935 Mbit/s

There is only one test Not Tested, because we are still waiting for the definition of the proper steps to be executed.

Please let me know any comments you might have.

Regards
-Ricardo
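[Editorial aside: the capture above is a compress/decompress round trip through the QAT endpoint. A minimal sketch of reproducing that check by hand, assuming the QATzip build in the customized image installs a gzip-like qzip utility (the utility name and its -d flag are assumptions; adf_ctl is taken from the capture):]

# Confirm the QAT virtual function is up before testing:
adf_ctl qat_dev0 status

# Round-trip a file through QAT-accelerated compression and verify integrity:
dd if=/dev/urandom of=test.tmp bs=1M count=8
md5sum test.tmp > test.tmp.md5
qzip test.tmp           # compresses to test.tmp.gz, as in the output above
qzip -d test.tmp.gz     # decompresses back to test.tmp
md5sum -c test.tmp.md5  # the checksum should still match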
From: Perez, Ricardo O
Sent: Thursday, June 13, 2019 11:16 PM
To: Perez, Ricardo O; starlingx-discuss at lists.starlingx.io
Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; Su Yang; zhaos at neusoft.com; Cabrales, Ada; Xie, Cindy
Subject: RE: QAT Validation

Hello StarlingX guys,

Finally, with the help of Cindy's team and the Neusoft team, we are able to have the QAT (PCIe card version) up and running. Here is a quick console capture showing the QAT device as seen in a VM launched on a host with the described hardware, passed to the VM via the pci_passthrough flavor property.

controller-0:~# sudo virsh list
 Id   Name                State
-----------------------------------
 5    instance-00000018   running

controller-0:~# sudo virsh console 5
Connected to domain instance-00000018
Escape character is ^]

login as 'cirros' user. default password: 'gocubsgo'. use 'sudo' for root.
qatricho2 login: cirros
Password:
$ lspci
00:00.0 Class 0600: 8086:1237
00:01.0 Class 0601: 8086:7000
00:01.1 Class 0101: 8086:7010
00:01.2 Class 0c03: 8086:7020
00:01.3 Class 0680: 8086:7113
00:02.0 Class 0300: 1013:00b8
00:03.0 Class 0200: 1af4:1000
00:04.0 Class 0100: 1af4:1001
00:05.0 Class 0b40: 8086:37c9  ---> This is the QAT device passed with the pci_passthrough property inside a VM using CirrOS.
00:06.0 Class 0b40: 8086:37c9  ---> You can see 2 devices because I have used 2 VFs in this case, but that is out of the scope of this e-mail.
00:07.0 Class 00ff: 1af4:1002

How do I know that 37c9 is the QAT device? Using the following command:

controller-0:~$ sudo lspci | grep Co-processor
Password:
3d:00.0 Co-processor: Intel Corporation C62x Chipset QuickAssist Technology (rev 04)
3d:01.0 Co-processor: Intel Corporation Device 37c9 (rev 04)

So far, this is the current status of the QAT feature test execution:

Test Cases Total: 12
Passed:        9
Failed:        0 :)
N/A:           1 ---> This doesn't apply for the Simplex configuration (where I'm actually running the tests)
Not Executed:  1 ---> Related to the REST API (we are still working on the steps for this one)
In Progress:   1 ---> Related to making use of the QATzip library

So tomorrow I'll continue working on the QATzip test and the REST API one; however, this last one is marked as priority 2.

If you require further details about the QAT test progress, please go to:
https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=84126711

Thanks in advance
-Ricardo
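[Editorial aside: the pci_passthrough flavor property mentioned above is what exposes the two VFs to the guest. A minimal sketch of that flavor setup, assuming a nova PCI alias (called qat-vf here) has already been defined for vendor/device 8086:37c9, and using placeholder image and network names; none of these names come from the thread:]

openstack flavor create --vcpus 2 --ram 2048 --disk 10 qat.flavor
# Request two QAT VFs per instance, matching the two 8086:37c9 devices seen in the guest:
openstack flavor set qat.flavor --property "pci_passthrough:alias"="qat-vf:2"
openstack server create --image cirros --flavor qat.flavor --network mgmt-net qat-vm
# Once booted, "lspci" inside the guest should list the VFs as Class 0b40: 8086:37c9.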
From: Perez, Ricardo O [mailto:ricardo.o.perez at intel.com]
Sent: Tuesday, June 11, 2019 4:32 PM
To: starlingx-discuss at lists.starlingx.io
Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; Su Yang; zhaos at neusoft.com; Cabrales, Ada; Xie, Cindy
Subject: Re: [Starlingx-discuss] QAT Validation

Hi Zhao,

I have tried both ISOs on the current WolfPass server using the external PCIe QAT device. However, I'm still hitting the same error after setting up the pci_passthrough property in the flavor and trying to launch a VM using such a flavor.

[cid:image002.jpg at 01D52DE3.7DDC18A0] (attached screenshot of the error)

I'm just finishing the installation of StarlingX on a server with an embedded QAT device. As soon as I finish, I'll let you know the results.

Thanks in advance
-Ricardo

From: Perez, Ricardo O
Sent: Thursday, June 6, 2019 11:00 PM
To: 'zhaos at neusoft.com'
Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; Su Yang
Subject: RE: QAT Validation

Hi Zhao,

I have just read your e-mail. Thanks for letting me know that you guys are on holiday (I didn't know) :). Let me check with the proposed image and see how it goes. I'll let you know the results by e-mail.

Thanks
-Ricardo

From: zhaos at neusoft.com [mailto:zhaos at neusoft.com]
Sent: Thursday, June 6, 2019 6:55 PM
To: Perez, Ricardo O
Cc: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Zhao, ShuaiX; zhaos at neusoft.com; Su Yang
Subject: Re: QAT Validation

Hi Ricardo:

Because our colleagues are currently on China's Dragon Boat Festival holiday, we may not be able to participate in your meeting today. We expect to schedule an appointment next Tuesday (6/11). We are very sorry that we cannot attend today's meeting.

As for the operation guide we provided, we have actually run through it many times; please make sure the operations are performed in order. Second, if there are still problems, we recommend trying again with this version: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190530T152953Z/

Thank you, and best wishes!

--------------------------------
From: zhao.shuai
Tel: 13704099430
Co.: Neusoft

-----Original Appointment-----
From: Perez, Ricardo O
Sent: June 7, 2019 5:05
To: Lin, Shuicheng; Wang, Hai Tao; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; zhaos at neusoft.com
Subject: QAT Validation
When: Thursday, June 6, 2019 23:00 - Friday, June 7, 2019 0:00 (UTC-06:00) Guadalajara, Mexico City, Monterrey
Where: https://zoom.us/j/2962988538
Importance: High

Hello guys,

I'm following all your steps using the provided files, and here is the status:

- ISO installation + provided helm charts - success
- Nova overrides using the provided yaml file - failing

So I would like to have a live session to show you the errors and see what is still missing from my side.

P.S. You can forward this meeting to the required people also.

Thanks
-Ricardo

Ricardo Perez is inviting you to a scheduled Zoom meeting.

Join Zoom Meeting
https://zoom.us/j/2962988538

One tap mobile
+14086380968,,2962988538# US (San Jose)
+16465588656,,2962988538# US (New York)

Dial by your location
+1 408 638 0968 US (San Jose)
+1 646 558 8656 US (New York)
Meeting ID: 296 298 8538
Find your local number: https://zoom.us/u/abJfeFY5aC
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.jpg
Type: image/jpeg
Size: 18189 bytes
Desc: image002.jpg
URL:
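[Editorial aside: the meeting invitation above reports "Nova overrides using the provided yaml file - failing". For context, applying such overrides on StarlingX normally goes through the sysinv helm-override commands. A minimal sketch, assuming stx-openstack is the application being customized and that the provided file is a standard helm values file; the file name is illustrative and the exact argument order of the 2019-era CLI is an assumption, not taken from this thread:]

# Load the provided values file as a helm override for the nova chart:
system helm-override-update stx-openstack nova openstack --values nova-overrides.yaml

# Re-apply the application so the override takes effect:
system application-apply stx-openstack

# Check the result:
system application-list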