From yi.c.wang at intel.com Mon Jul 1 01:12:06 2019 From: yi.c.wang at intel.com (Wang, Yi C) Date: Mon, 1 Jul 2019 01:12:06 +0000 Subject: [Starlingx-discuss] [docs] one system command was removed by design Message-ID: Hi docs team, The command "system firewall-rules-install" was intentionally removed as part of the storyboard https://storyboard.openstack.org/#!/story/2005066 . Without this command, customers can still add their firewall rules via Calico policy. Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL:
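As a reference for the docs update, a firewall rule of the kind the removed command used to install can be expressed as a Calico GlobalNetworkPolicy and applied with kubectl, as Yi describes later in this thread. A minimal sketch: the policy name, selector, and port are placeholder values, and the crd.projectcalico.org/v1 apiVersion assumes Calico is using the Kubernetes datastore (calicoctl would use projectcalico.org/v3 instead):

  kubectl apply -f - <<'EOF'
  apiVersion: crd.projectcalico.org/v1
  kind: GlobalNetworkPolicy
  metadata:
    name: custom-allow-tcp-8080        # hypothetical rule name
  spec:
    order: 100                         # lower order is evaluated earlier
    selector: all()                    # placeholder; scope to specific endpoints in practice
    types:
    - Ingress
    ingress:
    - action: Allow
      protocol: TCP
      destination:
        ports: [8080]                  # placeholder port
  EOF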
From yi.c.wang at intel.com Mon Jul 1 01:45:01 2019 From: yi.c.wang at intel.com (Wang, Yi C) Date: Mon, 1 Jul 2019 01:45:01 +0000 Subject: [Starlingx-discuss] a question about starlingx error handling behavior Message-ID: Hi Eric, I am working on LP 1815513. Based on my tests, if I unplug the management network cable of the active controller for a long time (for example, 30s) and then plug it back in, the whole system can recover after some reboots. But if I unplug the cable for a short time and then plug it back in, the whole system can't recover; I need to lock/unlock the controllers manually to bring the system back. So my questions are: 1. Is this behavior acceptable? (recovering the system by manual lock/unlock operations) 2. If the answer to #1 is no, we need the system to recover automatically. I am not familiar with the internal maintenance logic, could you give me some hints? Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jul 1 02:51:28 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 1 Jul 2019 02:51:28 +0000 Subject: [Starlingx-discuss] [BUG] armada stuck at pod waiting due to no pod exist for chart Message-ID: <9700A18779F35F49AF027300A49E7C76608B15F2@SHSMSX105.ccr.corp.intel.com> Hi all, While debugging LP issue [0], I came to suspect it is an armada issue, so I created story [1] in the armada project. The issue is that "StarlingX uses armada to manage the helm charts, and we found bug [0] recently. After debugging it, it appears to be caused by armada assuming there is at least 1 pod for each chart in its wait-for-resources-to-become-ready logic. But that is not true in some corner cases, such as the osh-openstack-ceph-rgw chart in StarlingX. Currently that chart has only 3 jobs, which require one-time execution. The jobs' pods are cleared after a host reboot, so when the chart is re-applied, no pod exists for the chart. And 'required' defaults to True for pods, which asserts there is at least 1 pod. This leads to the function _watch_resource_completions in wait.py getting stuck at w.stream(self.get_resources, **kwargs)." Could you help review my comments and share your suggestions for the issue? And do you agree with my fix suggestion in the story [1]? Thanks. [0]: https://bugs.launchpad.net/starlingx/+bug/1833609 [1]: https://storyboard.openstack.org/#!/story/2006133 Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Mon Jul 1 08:26:56 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Mon, 1 Jul 2019 08:26:56 +0000 Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Message-ID: Hi Tee, Could you please take a look at the following bug and leave a comment? https://bugs.launchpad.net/starlingx/+bug/1833622 This bug is caused by "system application-apply stx-openstack stuck at processing chart: osh-openstack-ceph-rgw, overall completion: 44.0%". This bug is similar to https://bugs.launchpad.net/starlingx/+bug/1833323, which is assigned to you. I think the following scenarios related to applying an application should be addressed: 1. The host reboots while the application is being applied; 2. OpenStack commands should be rejected until the application has been re-applied successfully. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jul 1 08:44:11 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 1 Jul 2019 08:44:11 +0000 Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C76608B1743@SHSMSX105.ccr.corp.intel.com> Hi Chenjie, There is another LP issue [0] tracking the 44% apply issue, owned by me. I suspect it is an armada issue; the fix is still being debugged. Removing and re-applying the application should work around the issue.
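In CLI terms, the workaround Shuicheng describes is roughly the following (a sketch; stx-openstack is the application named in the subject line):

  system application-remove stx-openstack    # remove the stuck application
  system application-list                    # wait until it reports uploaded
  system application-apply stx-openstack     # apply it again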
[0]: https://bugs.launchpad.net/starlingx/+bug/1833609 Best Regards Shuicheng From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Monday, July 1, 2019 4:27 PM To: Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS >; Peng, Peng >; Zhao, Forrest > Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Tee, Could you please take a look at the following bug and leave a comment? https://bugs.launchpad.net/starlingx/+bug/1833622 This bug is caused by "system application-apply stx-openstack stuck at processing chart: osh-openstack-ceph-rgw, overall completion: 44.0%". And this bug is similar to https://bugs.launchpad.net/starlingx/+bug/1833323 which is assigned to you. I think the following scenarios related to applying application should be addressed: 1. During applying application, the host reboot; 2. Forbidding the openstack command before re-applying application successfully. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Mon Jul 1 13:45:50 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 1 Jul 2019 13:45:50 +0000 Subject: [Starlingx-discuss] StarlingX 3.0 features In-Reply-To: <9A85D2917C58154C960D95352B22818BD07794E7@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EC2565E0F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD07787EE@fmsmsx123.amr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EC25671EE@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD07794E7@fmsmsx123.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FCE0B5@SHSMSX104.ccr.corp.intel.com> I've created a Storyboard for Kata support: https://storyboard.openstack.org/#!/story/2006145 we will do technical feasibility study and bring in/out proposal to release team. thx. - cindy From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, June 28, 2019 11:31 PM To: Rowsell, Brent Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 3.0 features "IA Platform features" are things like RDT (Resource Director Technology), SGX (Software Guard Extensions) and EPID (Extended Platform Identification). These features tend to get added automatically when we upgrade to newer components but should be tested within StarlingX. "Performance Testing" is creating an open framework to measure key performance indicators for StarlingX - things like network latency, fault detection times and so forth. "IOT Device Management" is building on the demo you showed at Denver and taking that to the next level, enabling IOT gateways and similar devices. brucej From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, June 27, 2019 6:46 PM To: Jones, Bruce E > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 3.0 features Bruce, Since there won't be another TSC meeting til Jul 11th, can you provide some more detail on the 3 not discussed items. Thanks, Brent From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, June 27, 2019 12:16 PM To: Rowsell, Brent > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 3.0 features Thank you, Brent for your review and questions. 1) FPGA accelerator support is for OpenStack (e.g. 
Cyborg integration) 2) Agree that the Containerized Ceph spec can/should be split 3) "Lead" is the person responsible internally for getting the work done and is the contact for any questions about the feature. It may or may not be the person who writes the spec. brucej From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, June 27, 2019 8:58 AM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 3.0 features Bruce, Thanks. A couple of comments/questions. 1) What's the difference between FPGA accelerator support and k8s fpga device plugin. As discussed at the TSC two wks ago, I have a dev that will be doing a spec for the later. 2) Containerized CEPH. It would be good to break this into two specs I think, one for prep content (R3) and one to complete the integration 3) What do we mean by Lead ? Spec owner ? Brent From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, June 27, 2019 11:44 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX 3.0 features Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the "Not yet discussed" features before closing on the 3.0 feature list. Thank you! brucej Work item TSC status Lead Status IA platform features Not yet discussed Abraham, Saul, Ada Most work is validation, new features are integrated as we adopt newer kernels over time. Real time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable. Containerize OVS DPDK AR Yong Forrest Not yet approved, Yong to get with Forrest and confirm intent Performance testing Not yet discussed Victor, Ada Proposal in progress FPGA accelerator support Push to 4.0 Abraham, Ada Too big for 3.0 but will likely need to start soon. FPGA hardware has been ordered OpenStack Train integration Approved Bruce (Dean) Continuous integration from OpenStack master Containerized Ceph Push to 4.0 Vivian Too big for 3.0 but will likely need to start soon Time Sensitive Networking Approved Forrest Spec in progress Kubernetes plugins for IA Partial Cindy Some reviews in progress, QAT approved, FPGA likely 4.0 Redfish Approved Cindy Spec in progress IOT device management Not yet discussed Abraham Demo'd @ Denver. POC / pathfinding work for a customer in progress, item likely too big for 3.0 SUSE build support & enablement Approved Abraham, Saul In progress, previously approved for 2.0 and continuing on Containerized OpenStack Clients Approved Dean? Nearly completed for 2.0, pushed to 3.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Mon Jul 1 14:13:36 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Mon, 1 Jul 2019 14:13:36 +0000 Subject: [Starlingx-discuss] system host-if-modify error Message-ID: <8557B550001AFB46A43A0CCC314BF85168772896@FMSMSX108.amr.corp.intel.com> Hi, I have been using the below command to set a SRIOV interface: $ system host-if-modify -m -n -N -p -c pci-sriov $ system host-if-modify -m 1500 -n sriov1 -N 5 -p physnet0 -c pci-sriov compute-0 38922809-dec1-4e55-9f58-5db4b0859ae5 Command works correctly from ISO 20190627 and older. But now I got the following error: system: error: unrecognized arguments: -p 47569880-3225-4a96-b897-b7bf1d114b8d Seems that there is an error in the structure or syntax of command, but -p flag and interface UUID are separated by other parameters. 
I also tried the -d flag and the interface name instead of the UUID, but got the same error. Do you know if this SRIOV command has changed? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Mon Jul 1 15:07:17 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Mon, 1 Jul 2019 15:07:17 +0000 Subject: [Starlingx-discuss] [docs] one system command was removed by design In-Reply-To: References: Message-ID: <3808363B39586544A6839C76CF81445EA1B803C4@ORSMSX104.amr.corp.intel.com> Thanks Yi. We'll discuss this in our upcoming meeting and will submit a PR to take care of this. -- Mike ________________________________ From: Wang, Yi C [yi.c.wang at intel.com] Sent: Sunday, June 30, 2019 6:12 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] [docs] one system command was removed by design Hi docs team, The command "system firewall-rules-install" was intentionally removed as part of the storyboard https://storyboard.openstack.org/#!/story/2005066 . Without this command, customers can still add their firewall rules via Calico policy. Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Mon Jul 1 15:56:00 2019 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 1 Jul 2019 15:56:00 +0000 Subject: [Starlingx-discuss] system host-if-modify error Message-ID: <15816CA7-848B-41AC-B7F2-9DC0BB3E5521@intel.com> The following commit in "stx-config" removed "-d <datanetworks>" *after* the 0627 build, so you cannot use either "-p" or "-d" with "system host-if-modify".

  commit 99524d919a48aaff245bb29f8d8bb60347d4253b
  Author: Teresa Ho
  Date: Mon Jun 24 22:55:30 2019 -0400
  Remove datanetworks param from interface commands

Regards, Yong On 01/07/2019, 7:18 AM, "Alonso, Juan Carlos" wrote: Hi, I have been using the below command to set a SRIOV interface:

  $ system host-if-modify -m <mtu> -n <if name> -N <num VFs> -p <physnet> -c pci-sriov <host> <if uuid>
  $ system host-if-modify -m 1500 -n sriov1 -N 5 -p physnet0 -c pci-sriov compute-0 38922809-dec1-4e55-9f58-5db4b0859ae5

Command works correctly on ISO 20190627 and older. But now I got the following error:

  system: error: unrecognized arguments: -p 47569880-3225-4a96-b897-b7bf1d114b8d

It seems there is an error in the structure or syntax of the command, but the -p flag and the interface UUID are separated by other parameters. I also tried the -d flag and the interface name instead of the UUID, but got the same error. Do you know if this SRIOV command has changed? Regards. Juan Carlos Alonso
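For reference, after the change Yong cites, the data network association is provisioned as a separate step rather than via -p/-d on host-if-modify. A sketch of the new flow, reusing the values from Juan Carlos's example (the command sequence is taken from later messages in this thread):

  system datanetwork-add physnet0 vlan
  system host-if-modify -m 1500 -n sriov1 -N 5 -c pci-sriov compute-0 <interface uuid or name>
  system interface-datanetwork-assign compute-0 sriov1 physnet0

-------------- next part -------------- An HTML attachment was scrubbed...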
URL: From vm.rod25 at gmail.com Mon Jul 1 19:49:49 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 1 Jul 2019 14:49:49 -0500 Subject: [Starlingx-discuss] Multi-OS team meeting : Notes of the meeting: 7/01/19 Message-ID: Multi-OS team meeting Summary of the meeting: 7/01/19 - Opens - Start the WIP - Wind river + Yocto - Openstack.org now has a yocto subpage - https://wiki.openstack.org/wiki/StarlingX/MultiOS/Yocto - in GitHub meta starling x - https://github.com/zbsarashki/meta-starlingX.git - Saul will make some comments with Stephen on the directory layout structure offline - Open SUSE FLOCK services packaging update - We have 50 out of 59 complete: https://build.opensuse.org/project/show/Cloud:StarlingX:2.0 - We are working on the installation phase now to check run time dependencies - Saul, Marcela, and Intel team working on it. - We are capturing those changes - There is a case where we had some patches for open suse that will wait for RC1: - https://build.opensuse.org/package/show/Cloud:StarlingX:2.0/mtce - We need to test they don't break the build of the flock service build - we need to test they don't break installation and functionality - Is because of this that Saul has been workin gon the automation of this process - There is a Jenkins job monitoring the repos and if a change in the repo, starts an ansible to build and tests - Working on converting ansible to zuul - The idea is that zuul monitor changes in git if change takes the git tree, make it an osc repo to build and send it to obs to build ( sandbox repo ) - This will work also in Debian and possible on centos -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Tue Jul 2 01:37:48 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 2 Jul 2019 01:37:48 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190630 Message-ID: <8557B550001AFB46A43A0CCC314BF85168772B06@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-June-30 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chenjie.xu at intel.com Tue Jul 2 01:54:19 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Tue, 2 Jul 2019 01:54:19 +0000 Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608B1743@SHSMSX105.ccr.corp.intel.com> Message-ID: Hi Shuicheng, I checked the armada logs and found the following logs are similar to your bug (https://bugs.launchpad.net/starlingx/+bug/1833609 ). Could you please help confirm that whether the bug ( https://bugs.launchpad.net/starlingx/+bug/1833622 ) is a duplicate bug of your bug or not? 2019-06-24 10:51:58.243 36 INFO armada.handlers.wait [-] [chart=openstack-ceph-rgw]: Waiting for resource type=pod, namespace=openstack labels=release_group=osh-openstack-ceph-rgw required=True for 1800s^[[00m 2019-06-24 10:51:58.243 36 DEBUG armada.handlers.wait [-] [chart=openstack-ceph-rgw]: Starting to wait on: namespace=openstack, resource type=pod, label_selector=(release_group=osh-openstack-ceph-rgw), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:362^[[00m Best Regards, Xu, Chenjie From: Xu, Chenjie Sent: Monday, July 1, 2019 4:51 PM To: Lin, Shuicheng ; Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS ; Peng, Peng ; Zhao, Forrest Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Shuicheng, Thank you for your information! This bug is found during an automation testing. I will look at your bug and give some feedbacks. Best Regards, Xu, Chenjie From: Lin, Shuicheng Sent: Monday, July 1, 2019 4:44 PM To: Xu, Chenjie >; Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS >; Peng, Peng >; Zhao, Forrest > Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Chenjie, There is another LP issue [0] track the 44% apply issue, owned by me. And I suspect it is armada issue. Fix of the issue is still under debug now. Remove and apply the application again should be able to work-around the issue. [0]: https://bugs.launchpad.net/starlingx/+bug/1833609 Best Regards Shuicheng From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Monday, July 1, 2019 4:27 PM To: Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS >; Peng, Peng >; Zhao, Forrest > Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Tee, Could you please take a look at the following bug and leave a comment? https://bugs.launchpad.net/starlingx/+bug/1833622 This bug is caused by "system application-apply stx-openstack stuck at processing chart: osh-openstack-ceph-rgw, overall completion: 44.0%". And this bug is similar to https://bugs.launchpad.net/starlingx/+bug/1833323 which is assigned to you. I think the following scenarios related to applying application should be addressed: 1. During applying application, the host reboot; 2. Forbidding the openstack command before re-applying application successfully. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shuicheng.lin at intel.com Tue Jul 2 02:00:48 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 2 Jul 2019 02:00:48 +0000 Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608B1743@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608B1991@SHSMSX105.ccr.corp.intel.com> Hi Chenjie, Per the log, it is a duplicate of my bug. Thanks. Best Regards Shuicheng From: Xu, Chenjie Sent: Tuesday, July 2, 2019 9:54 AM To: Lin, Shuicheng ; 'Tee.Ngo at windriver.com' Cc: 'starlingx-discuss at lists.starlingx.io' ; Liu, ZhipengS ; 'Peng, Peng' ; Zhao, Forrest Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Shuicheng, I checked the armada logs and found the following logs are similar to your bug (https://bugs.launchpad.net/starlingx/+bug/1833609 ). Could you please help confirm that whether the bug ( https://bugs.launchpad.net/starlingx/+bug/1833622 ) is a duplicate bug of your bug or not? 2019-06-24 10:51:58.243 36 INFO armada.handlers.wait [-] [chart=openstack-ceph-rgw]: Waiting for resource type=pod, namespace=openstack labels=release_group=osh-openstack-ceph-rgw required=True for 1800s^[[00m 2019-06-24 10:51:58.243 36 DEBUG armada.handlers.wait [-] [chart=openstack-ceph-rgw]: Starting to wait on: namespace=openstack, resource type=pod, label_selector=(release_group=osh-openstack-ceph-rgw), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:362^[[00m Best Regards, Xu, Chenjie From: Xu, Chenjie Sent: Monday, July 1, 2019 4:51 PM To: Lin, Shuicheng >; Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS >; Peng, Peng >; Zhao, Forrest > Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Shuicheng, Thank you for your information! This bug is found during an automation testing. I will look at your bug and give some feedbacks. Best Regards, Xu, Chenjie From: Lin, Shuicheng Sent: Monday, July 1, 2019 4:44 PM To: Xu, Chenjie >; Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS >; Peng, Peng >; Zhao, Forrest > Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Chenjie, There is another LP issue [0] track the 44% apply issue, owned by me. And I suspect it is armada issue. Fix of the issue is still under debug now. Remove and apply the application again should be able to work-around the issue. [0]: https://bugs.launchpad.net/starlingx/+bug/1833609 Best Regards Shuicheng From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Monday, July 1, 2019 4:27 PM To: Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS >; Peng, Peng >; Zhao, Forrest > Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Tee, Could you please take a look at the following bug and leave a comment? https://bugs.launchpad.net/starlingx/+bug/1833622 This bug is caused by "system application-apply stx-openstack stuck at processing chart: osh-openstack-ceph-rgw, overall completion: 44.0%". And this bug is similar to https://bugs.launchpad.net/starlingx/+bug/1833323 which is assigned to you. I think the following scenarios related to applying application should be addressed: 1. 
During applying application, the host reboot; 2. Forbidding the openstack command before re-applying application successfully. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Tue Jul 2 02:03:26 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Tue, 2 Jul 2019 02:03:26 +0000 Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% In-Reply-To: <9700A18779F35F49AF027300A49E7C76608B1991@SHSMSX105.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C76608B1743@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608B1991@SHSMSX105.ccr.corp.intel.com> Message-ID: Hi Shuicheng, Thank you so much! Best Regards, Xu, Chenjie From: Lin, Shuicheng Sent: Tuesday, July 2, 2019 10:01 AM To: Xu, Chenjie ; 'Tee.Ngo at windriver.com' Cc: 'starlingx-discuss at lists.starlingx.io' ; Liu, ZhipengS ; 'Peng, Peng' ; Zhao, Forrest Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Chenjie, Per the log, it is a duplicate of my bug. Thanks. Best Regards Shuicheng From: Xu, Chenjie Sent: Tuesday, July 2, 2019 9:54 AM To: Lin, Shuicheng >; 'Tee.Ngo at windriver.com' > Cc: 'starlingx-discuss at lists.starlingx.io' >; Liu, ZhipengS >; 'Peng, Peng' >; Zhao, Forrest > Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Shuicheng, I checked the armada logs and found the following logs are similar to your bug (https://bugs.launchpad.net/starlingx/+bug/1833609 ). Could you please help confirm that whether the bug ( https://bugs.launchpad.net/starlingx/+bug/1833622 ) is a duplicate bug of your bug or not? 2019-06-24 10:51:58.243 36 INFO armada.handlers.wait [-] [chart=openstack-ceph-rgw]: Waiting for resource type=pod, namespace=openstack labels=release_group=osh-openstack-ceph-rgw required=True for 1800s^[[00m 2019-06-24 10:51:58.243 36 DEBUG armada.handlers.wait [-] [chart=openstack-ceph-rgw]: Starting to wait on: namespace=openstack, resource type=pod, label_selector=(release_group=osh-openstack-ceph-rgw), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:362^[[00m Best Regards, Xu, Chenjie From: Xu, Chenjie Sent: Monday, July 1, 2019 4:51 PM To: Lin, Shuicheng >; Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS >; Peng, Peng >; Zhao, Forrest > Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Shuicheng, Thank you for your information! This bug is found during an automation testing. I will look at your bug and give some feedbacks. Best Regards, Xu, Chenjie From: Lin, Shuicheng Sent: Monday, July 1, 2019 4:44 PM To: Xu, Chenjie >; Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS >; Peng, Peng >; Zhao, Forrest > Subject: RE: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Chenjie, There is another LP issue [0] track the 44% apply issue, owned by me. And I suspect it is armada issue. Fix of the issue is still under debug now. Remove and apply the application again should be able to work-around the issue. 
[0]: https://bugs.launchpad.net/starlingx/+bug/1833609 Best Regards Shuicheng From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Monday, July 1, 2019 4:27 PM To: Tee.Ngo at windriver.com Cc: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Peng, Peng; Zhao, Forrest Subject: [Starlingx-discuss] Can't boot VM because system application-apply stx-openstack stuck at 44.0% Hi Tee, Could you please take a look at the following bug and leave a comment? https://bugs.launchpad.net/starlingx/+bug/1833622 This bug is caused by "system application-apply stx-openstack stuck at processing chart: osh-openstack-ceph-rgw, overall completion: 44.0%". This bug is similar to https://bugs.launchpad.net/starlingx/+bug/1833323, which is assigned to you. I think the following scenarios related to applying an application should be addressed: 1. The host reboots while the application is being applied; 2. OpenStack commands should be rejected until the application has been re-applied successfully. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From yi.c.wang at intel.com Tue Jul 2 02:44:23 2019 From: yi.c.wang at intel.com (Wang, Yi C) Date: Tue, 2 Jul 2019 02:44:23 +0000 Subject: [Starlingx-discuss] ipv6 support? Message-ID: Hi Matt, I noticed you made a patch for "ipv6 cluster networking support". Does that mean ipv6 is now fully enabled in StarlingX? When we bootstrap StarlingX, can we set up an ipv6 address on OAM and provide a docker registry with ipv6 support? Thanks. BR. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From forrest.zhao at intel.com Tue Jul 2 08:38:53 2019 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Tue, 2 Jul 2019 08:38:53 +0000 Subject: [Starlingx-discuss] StarlingX 3.0 features In-Reply-To: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com> Message-ID: <6345119E91D5C843A93D64F498ACFA137452ED8E@SHSMSX101.ccr.corp.intel.com> Hi Bruce and TSC reviewers, Here are the intents of "containerize OVS DPDK": 1. As StarlingX moves to containerization, most OpenStack components have been containerized. That includes OVS containerization, but OVS-DPDK is still running on the host. It's better to containerize OVS/DPDK as well, to leverage the benefits brought by containerization. 2. Currently, StarlingX supports OVS and OVS-DPDK. OVS is managed by openstack-helm and runs in a container, but OVS-DPDK is managed by puppet and runs directly on the host. Maintaining two implementations and keeping them consistent costs more resources than maintaining just one. For example, if we want to make some changes (upgrade the OVS version, enable some features), we need to make the changes in two places, which introduces significant extra upgrade/maintenance cost. "Containerize OVS DPDK" can eliminate such duplication and inconsistency. Thanks, Forrest From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, June 27, 2019 11:44 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX 3.0 features Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the "Not yet discussed" features before closing on the 3.0 feature list. Thank you! brucej

Work item | TSC status | Lead | Status
IA platform features | Not yet discussed | Abraham, Saul, Ada | Most work is validation, new features are integrated as we adopt newer kernels over time.
  Real time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable.
Containerize OVS DPDK | AR Yong | Forrest | Not yet approved, Yong to get with Forrest and confirm intent
Performance testing | Not yet discussed | Victor, Ada | Proposal in progress
FPGA accelerator support | Push to 4.0 | Abraham, Ada | Too big for 3.0 but will likely need to start soon. FPGA hardware has been ordered
OpenStack Train integration | Approved | Bruce (Dean) | Continuous integration from OpenStack master
Containerized Ceph | Push to 4.0 | Vivian | Too big for 3.0 but will likely need to start soon
Time Sensitive Networking | Approved | Forrest | Spec in progress
Kubernetes plugins for IA | Partial | Cindy | Some reviews in progress, QAT approved, FPGA likely 4.0
Redfish | Approved | Cindy | Spec in progress
IOT device management | Not yet discussed | Abraham | Demo'd @ Denver. POC / pathfinding work for a customer in progress, item likely too big for 3.0
SUSE build support & enablement | Approved | Abraham, Saul | In progress, previously approved for 2.0 and continuing on
Containerized OpenStack Clients | Approved | Dean? | Nearly completed for 2.0, pushed to 3.0

-------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Tue Jul 2 08:56:42 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Tue, 2 Jul 2019 08:56:42 +0000 Subject: [Starlingx-discuss] Limitations on Multus/SRIOV CNI Plugins In-Reply-To: References: Message-ID: Hi all, The following 2 bugs have been reported to track limitations 1 and 2: https://bugs.launchpad.net/starlingx/+bug/1835018 https://bugs.launchpad.net/starlingx/+bug/1835020 Limitation 3 should be covered in a StarlingX guide on how to use Multus/SR-IOV CNI, which is still missing; something like the following: To run SRIOV+DPDK, the pod needs to request memory and mount a hugepage-volume on the correct host path, as follows:

  resources:
    requests:
      memory: 2Gi
      intel.com/pci_sriov_net_physnet0: 2
    limits:
      memory: 2Gi
      intel.com/pci_sriov_net_physnet0: 2
  volumeMounts:
  - name: hugepage-volume
    mountPath: /dev/hugepages
  volumes:
  - name: hugepage-volume
    hostPath:
      path: /dev/hugepages

Best Regards, Xu, Chenjie
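On limitation 1 (VF MAC addresses must be set manually; see the quoted findings below), the usual manual step is ip link on the parent PF. A sketch with placeholder names (the PF name, VF index, and MAC value are assumptions for illustration):

  # assumptions: enp24s0f0 is the SR-IOV PF, VF index 0, locally administered MAC
  sudo ip link set dev enp24s0f0 vf 0 mac 02:00:00:00:00:01
  ip link show enp24s0f0    # verify the VF MAC took effect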
From: Xu, Chenjie Sent: Thursday, June 27, 2019 4:32 PM To: Webster, Steven ; Khalil, Ghada ; Peters, Matt Cc: Zhao, Forrest ; Guo, Ruijing ; Le, Huifeng Subject: Limitations on Multus/SRIOV CNI Plugins Hi Steven, During my testing of the Multus/SRIOV CNI plugins, I have the following findings: 1. The sysadmin needs to set the MAC address manually for a VF. 2. The configuration for the SR-IOV network device plugin has changed, as described here: https://github.com/intel/sriov-network-device-plugin#configurations For now StarlingX uses its own docker image and doesn't need to change the configuration, but this will become a bug when StarlingX updates the docker image to a newer version. 3. This is not a bug but should be noted: normally, huge pages should be supported by kubernetes to run SRIOV+DPDK, and the pod needs to request huge pages as follows:

  resources:
    requests:
      memory: 2Gi
      intel.com/pci_sriov_net_physnet0: 2
    limits:
      hugepages-1Gi: 2Gi
      memory: 2Gi
      intel.com/pci_sriov_net_physnet0: 2
  volumeMounts:
  - name: hugepage-volume
    mountPath: /dev/hugepages
  volumes:
  - name: hugepage-volume
    emptyDir:
      medium: HugePages

However, the kubernetes provided by StarlingX will:
- enable huge pages for non-openstack based worker nodes
- disable huge pages for openstack based worker nodes: https://opendev.org/starlingx/config/src/branch/master/puppet-manifests/src/modules/platform/manifests/kubernetes.pp#L118
But in my testing, the pod can still get huge pages on an openstack based worker node where kubernetes doesn't provide huge page support. In that case the pod needs to request memory and mount a hugepage-volume on the correct host path, as follows:

  resources:
    requests:
      memory: 2Gi
      intel.com/pci_sriov_net_physnet0: 2
    limits:
      memory: 2Gi
      intel.com/pci_sriov_net_physnet0: 2
  volumeMounts:
  - name: hugepage-volume
    mountPath: /dev/hugepages
  volumes:
  - name: hugepage-volume
    hostPath:
      path: /dev/hugepages

Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.MacDonald at windriver.com Tue Jul 2 11:53:50 2019 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Tue, 2 Jul 2019 11:53:50 +0000 Subject: [Starlingx-discuss] a question about starlingx error handling behavior In-Reply-To: References: Message-ID: <210898B96CA058408C55992CCAD98676C101F633@ALA-MBD.corp.ad.wrs.com> Please perform both cases, and for each:
- indicate what cables/interfaces were pulled
- indicate what hosts the cables were pulled from
- indicate the approximate timestamp of when the cable was pulled
- indicate the approximate timestamp of when the cable was reinserted
- run 'collect all' and provide me access to the collect tarball
Also, just to be sure ... please confirm that you are physically pulling the cable and not just ifdowning the interface. Eric. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Sunday, June 30, 2019 9:45 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I am working on LP 1815513. Based on my tests, if I unplug the management network cable of the active controller for a long time (for example, 30s) and then plug it back in, the whole system can recover after some reboots. But if I unplug the cable for a short time and then plug it back in, the whole system can't recover; I need to lock/unlock the controllers manually to bring the system back. So my questions are: 1. Is this behavior acceptable? (recovering the system by manual lock/unlock operations) 2. If the answer to #1 is no, we need the system to recover automatically. I am not familiar with the internal maintenance logic, could you give me some hints? Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Jul 2 12:23:19 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 2 Jul 2019 12:23:19 +0000 Subject: [Starlingx-discuss] Community Call (July 3, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A830D9@ALA-MBD.corp.ad.wrs.com> Hi everyone, we will be holding the Community Call tomorrow, at the usual time. Please feel free to add topics to the agenda at [0]. Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190703T1400 From Matt.Peters at windriver.com Tue Jul 2 13:14:46 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Tue, 2 Jul 2019 13:14:46 +0000 Subject: [Starlingx-discuss] ipv6 support? In-Reply-To: References: Message-ID: Hello, Yes IPv6 is now supported. The only limitation is that dual-stack is not supported, so all networks must be IPv6 (with the exception of the pxeboot network). Please note that not all docker image registries support IPv6, therefore you have to use an IPv6 capable proxy, or deploy with a NAT64/DNS64 gateway. Regards, Matt From: "Wang, Yi C" Date: Monday, July 1, 2019 at 10:44 PM To: "Peters, Matt" Cc: "'starlingx-discuss at lists.starlingx.io'" Subject: [Starlingx-discuss] ipv6 support? Hi Matt, I noticed you made a patch of “ipv6 cluster networking support”. Does that mean ipv6 is fully enabled in StarlingX now? When we bootstrap StarlingX, we can setup ipv6 address on OAM and provide docker registry with ipv6 support? Thanks. BR. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From yi.c.wang at intel.com Tue Jul 2 13:20:42 2019 From: yi.c.wang at intel.com (Wang, Yi C) Date: Tue, 2 Jul 2019 13:20:42 +0000 Subject: [Starlingx-discuss] a question about starlingx error handling behavior In-Reply-To: <210898B96CA058408C55992CCAD98676C101F633@ALA-MBD.corp.ad.wrs.com> References: <210898B96CA058408C55992CCAD98676C101F633@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Eric, Thank you, Eric! I will collect all the information and get back to you soon. I confirm that I physically pulled the cable. Thanks. Yi From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Tuesday, July 2, 2019 7:54 PM To: Wang, Yi C Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] a question about starlingx error handling behavior Please perform both cases and for each - indicate what cables/interfaces were pulled - indicate what hosts the cables were pulled from - indicate approximate timestamp of when the cable was pulled - indicate approximate timestamp of when the cable was reinserted - Run 'collect all' and provide me access to the collect tarball Also, just to be sure ... please confirm that you are physically pulling the cable and not just ifdowning the interface. Eric. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Sunday, June 30, 2019 9:45 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I am working on the LP 1815513. Based on my tests, if I unplug the cable of active controller for management network for a long time (for example, 30s), and then plug it, the whole system can recover after some reboots. But if I unplug the cable for a short time, and then plug it. The whole system can't recover. I need to lock/unlock controllers manually to bring the system back. So my questions are: 1. Is the behavior acceptable? (recover the system by manual lock/unlock operations) 2. If the answer is no for #1, we need the system to recover automatically. I am not familiar with internal maintenance logics, could you give me some hints? Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yi.c.wang at intel.com Tue Jul 2 13:31:21 2019 From: yi.c.wang at intel.com (Wang, Yi C) Date: Tue, 2 Jul 2019 13:31:21 +0000 Subject: [Starlingx-discuss] ipv6 support? In-Reply-To: References: Message-ID: Matt, Many thanks for your detailed explanation! BR Yi From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Tuesday, July 2, 2019 9:15 PM To: Wang, Yi C Cc: 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] ipv6 support? Hello, Yes IPv6 is now supported. The only limitation is that dual-stack is not supported, so all networks must be IPv6 (with the exception of the pxeboot network). Please note that not all docker image registries support IPv6, therefore you have to use an IPv6 capable proxy, or deploy with a NAT64/DNS64 gateway. Regards, Matt From: "Wang, Yi C" > Date: Monday, July 1, 2019 at 10:44 PM To: "Peters, Matt" > Cc: "'starlingx-discuss at lists.starlingx.io'" > Subject: [Starlingx-discuss] ipv6 support? Hi Matt, I noticed you made a patch of “ipv6 cluster networking support”. Does that mean ipv6 is fully enabled in StarlingX now? When we bootstrap StarlingX, we can setup ipv6 address on OAM and provide docker registry with ipv6 support? Thanks. BR. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Jul 2 13:35:54 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 2 Jul 2019 13:35:54 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/3 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FD0000@SHSMSX104.ccr.corp.intel.com> Agenda for 7/3 meeting: - stx 3.0 features proposal review (all) - Redfish support (https://storyboard.openstack.org/#!/story/2005861), Eric/Zhipeng - Python2to3 transition (https://wiki.openstack.org/wiki/StarlingX/Python2, story to be created), Austin - non-Openstack patch cleanup (remaining SB to be reviewed) - Ceph containerization (https://storyboard.openstack.org/#!/story/2005527) Tingjie - Support Kata container (https://storyboard.openstack.org/#!/story/2006145), Shuicheng - QAT support in Cinder & Glance Vivian - will track it in OpenStack sub-project. - systemd standardization (Story to be created). Saul/Marcela - Ceph test status report (Abraham/Fernando) - stx 2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin, Alex/Bin) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, July 3, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . 
Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From yi.c.wang at intel.com Tue Jul 2 14:13:24 2019 From: yi.c.wang at intel.com (Wang, Yi C) Date: Tue, 2 Jul 2019 14:13:24 +0000 Subject: [Starlingx-discuss] [docs] one system command was removed by design In-Reply-To: <3808363B39586544A6839C76CF81445EA1B803C4@ORSMSX104.amr.corp.intel.com> References: <3808363B39586544A6839C76CF81445EA1B803C4@ORSMSX104.amr.corp.intel.com> Message-ID: Mike, Here is a reference on how to create Calico policy in case customers want to add new rules. The resource can be applied by kubernetes command "kubectl apply -f xxx.yaml". https://docs.projectcalico.org/v3.1/reference/calicoctl/resources/globalnetworkpolicy Thanks. Yi From: Tullis, Michael L Sent: Monday, July 1, 2019 11:07 PM To: Wang, Yi C ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [docs] one system command was removed by design Thanks Yi. We'll discuss this in our upcoming meeting and will submit a PR to take care of this. -- Mike ________________________________ From: Wang, Yi C [yi.c.wang at intel.com] Sent: Sunday, June 30, 2019 6:12 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] [docs] one system command was removed by design Hi docs team, The command "system firewall-rules-install" was intentionally removed when doing the storyboad https://storyboard.openstack.org/#!/story/2005066 . Without this command, customers still can add their firewall rules by Calico policy. Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From Allain.Legacy at windriver.com Tue Jul 2 14:56:33 2019 From: Allain.Legacy at windriver.com (Legacy, Allain) Date: Tue, 2 Jul 2019 14:56:33 +0000 Subject: [Starlingx-discuss] system host-if-modify error In-Reply-To: <8557B550001AFB46A43A0CCC314BF85168772896@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85168772896@FMSMSX108.amr.corp.intel.com> Message-ID: <70A7408C6E1BFB41B192A929744D8523C1D29446@ALA-MBD.corp.ad.wrs.com> The commands related to creating and modifying interface was changed recently. Please refer to the mailing list post with subject: "Provisioning changes to host interface commands" which I have attached to this message. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Monday, July 01, 2019 10:14 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] system host-if-modify error Hi, I have been using the below command to set a SRIOV interface: $ system host-if-modify -m -n -N -p -c pci-sriov $ system host-if-modify -m 1500 -n sriov1 -N 5 -p physnet0 -c pci-sriov compute-0 38922809-dec1-4e55-9f58-5db4b0859ae5 Command works correctly from ISO 20190627 and older. But now I got the following error: system: error: unrecognized arguments: -p 47569880-3225-4a96-b897-b7bf1d114b8d Seems that there is an error in the structure or syntax of command, but -p flag and interface UUID are separated by other parameters. I also try to use -d flag and interface name instead of uuid but got the same error. Do you know if this SRIOV command have changed? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
-------------- next part -------------- An embedded message was scrubbed... From: "Ho, Teresa" Subject: [Starlingx-discuss] Provisioning changes to host interface commands Date: Tue, 11 Jun 2019 20:20:43 +0000 Size: 16639 URL: From juan.carlos.alonso at intel.com Tue Jul 2 16:27:38 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 2 Jul 2019 16:27:38 +0000 Subject: [Starlingx-discuss] system host-if-modify error In-Reply-To: <70A7408C6E1BFB41B192A929744D8523C1D29446@ALA-MBD.corp.ad.wrs.com> References: <8557B550001AFB46A43A0CCC314BF85168772896@FMSMSX108.amr.corp.intel.com> <70A7408C6E1BFB41B192A929744D8523C1D29446@ALA-MBD.corp.ad.wrs.com> Message-ID: <8557B550001AFB46A43A0CCC314BF85168772C91@FMSMSX108.amr.corp.intel.com> Hello, Yes, now the commands are:

  $ system datanetwork-add physnet2 vlan
  $ system host-if-modify -m 1500 -n sriov0 -N 5 -c pci-sriov compute-0 eno1
  $ system interface-datanetwork-assign compute-0 sriov0 physnet2

Then, when I tried to create a network attached to physnet2 (in order to create ports attached to this network), I got the following error:

  controller-0:~$ openstack network create --mtu 1500 --provider-network-type vlan --provider-physical-network physnet2 sriov-net
  Error while executing command: BadRequestException: 400, Invalid input for operation: physical_network 'physnet2' unknown for VLAN provider network.

I am not sure why I get this error, since I created the data network as vlan. Maybe it should be a different type, since it is created for SRIOV. Do you have any idea? Regards. Juan Carlos Alonso From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Tuesday, July 2, 2019 9:57 AM To: Alonso, Juan Carlos ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: system host-if-modify error The commands related to creating and modifying interfaces were changed recently. Please refer to the mailing list post with the subject "Provisioning changes to host interface commands", which I have attached to this message. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Monday, July 01, 2019 10:14 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] system host-if-modify error Hi, I have been using the below command to set a SRIOV interface:

  $ system host-if-modify -m <mtu> -n <if name> -N <num VFs> -p <physnet> -c pci-sriov <host> <if uuid>
  $ system host-if-modify -m 1500 -n sriov1 -N 5 -p physnet0 -c pci-sriov compute-0 38922809-dec1-4e55-9f58-5db4b0859ae5

Command works correctly on ISO 20190627 and older. But now I got the following error:

  system: error: unrecognized arguments: -p 47569880-3225-4a96-b897-b7bf1d114b8d

It seems there is an error in the structure or syntax of the command, but the -p flag and the interface UUID are separated by other parameters. I also tried the -d flag and the interface name instead of the UUID, but got the same error. Do you know if this SRIOV command has changed? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL:
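Before creating the neutron network, it may help to confirm that the data network is really attached to the interface and known to the system. A sketch (the list commands are assumed from the same provisioning change; output formats may vary):

  system datanetwork-list                          # physnet2 should be listed with type vlan
  system interface-datanetwork-list compute-0      # sriov0 should show physnet2 assigned
  # the host generally needs to be unlocked/available for the change to reach neutron
  system host-show compute-0 | grep -e administrative -e operational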
From juan.carlos.alonso at intel.com Tue Jul 2 18:31:58 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 2 Jul 2019 18:31:58 +0000 Subject: [Starlingx-discuss] Data Base Purge Message-ID: <8557B550001AFB46A43A0CCC314BF85168772D02@FMSMSX108.amr.corp.intel.com> Hello, After creating flavors, images, networks and launching instances with openstack commands, I need to "soft delete VMs and entries", but I am not sure what "soft delete" means. Does it mean VMs and other entries should be deleted with openstack commands too? ($ openstack server delete vm-1) If not, what is the mechanism to 'soft-delete' entries? After creating and soft deleting several instances, I need to purge the metadata tables of soft-deleted instances (instances_metadata, instance_system, instance_info_cache, etc). Do you know the mechanism or commands to purge the databases so they don't grow too large? If this happens automatically when entries are deleted, how can I validate that it is actually happening? I will really appreciate your help! Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jul 2 19:23:27 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 2 Jul 2019 21:23:27 +0200 Subject: [Starlingx-discuss] Open Infrastructure Summit Shanghai CFP is closing soon Message-ID: <55D419B8-B998-4AD7-8220-11A6FEBBB529@gmail.com> Hi StarlingX Community, The Shanghai Summit Call for Presentations [1] deadline is TODAY, July 2 at 11:59 pm PT (July 3, 2019 at 15:00 China Standard Time)! Submit your presentations, panels, and Hands-on Workshops for the Open Infrastructure Summit [2] by the end of today, and join the global community in Shanghai, November 4-6, 2019. Sessions will be presented in both Mandarin and English, so you may submit [3] your presentation in either language. Tracks [4]: 5G, NFV & Edge; AI, Machine Learning & HPC; CI/CD; Container Infrastructure; Getting Started; Hands-on Workshops; Open Development; Private & Hybrid Cloud; Public Cloud; Security; Other. Helpful Shanghai Summit & PTG information: * Register now [5] before the early bird registration deadline in early August (USD or RMB options available) * Apply for Travel Support [6] before August 8. More information here [7]. * Interested in sponsoring the Summit? [8]. * The content submission process for the Forum and Project Teams Gathering will be managed separately in the upcoming months. We look forward to your submissions! Cheers, Ashlee [1] https://cfp.openstack.org/ [2] https://www.openstack.org/summit/shanghai-2019/ [3] https://cfp.openstack.org/ [4] https://www.openstack.org/summit/shanghai-2019/summit-categories/ [5] https://www.openstack.org/summit/shanghai-2019/ [6] https://openstackfoundation.formstack.com/forms/travelsupportshanghai [7] https://www.openstack.org/summit/shanghai-2019/travel/ [8] https://www.openstack.org/summit/shanghai-2019/sponsors/ From scott.little at windriver.com Tue Jul 2 19:25:09 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 2 Jul 2019 15:25:09 -0400 Subject: [Starlingx-discuss] Out of date package versions Message-ID: <5bd90874-007f-18be-123d-c93a6f3f215f@windriver.com> A brief report showing where our package versions appear to have diverged from upstream, based on upstream's tags. If upstream has issued a new version, we should adopt it in our packaging. Please check your packages and update the spec files and TIS_BASE_SRCREV as required. Scott

  git                              upstream tag   STX ver
  stx/git/python-cinderclient      4.2.0          4.1.0
  stx/git/horizon                  15.1.0         14.0.0
  stx/git/python-ironicclient      2.8.0          2.7.0
  stx/git/python-keystoneauth      3.14.0         3.13.1
  stx/git/python-magnumclient      2.13.0         2.12.0
  stx/git/python-muranoclient      1.2.0          1.1.1
  stx/git/python-novaclient        14.1.0         13.0.0
  stx/git/python-openstackclient   3.19.0         3.18.0
  stx/git/python-openstacksdk      0.31.1         0.25.0
  stx/git/python-pankoclient       0.6.0          0.5.0

-------------- next part -------------- An HTML attachment was scrubbed... URL:
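A quick way to reproduce Scott's comparison for any one of these repos (a sketch; it assumes the checkout's remote tracks the upstream project, and that TIS_BASE_SRCREV lives in the package's build_srpm.data as usual for stx packages):

  cd stx/git/python-cinderclient     # path as listed in the report
  git fetch --tags origin
  git describe --tags --abbrev=0     # newest tag reachable from the current branch
  # if it is newer than the STX version, bump Version: in the spec file and
  # update TIS_BASE_SRCREV in the package's build_srpm.data to the matching commit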
Please check your packages and update the spec files and TIS_BASE_SRCREV as required.

Scott

git                              upstream tag   STX ver
stx/git/python-cinderclient      4.2.0          4.1.0
stx/git/horizon                  15.1.0         14.0.0
stx/git/python-ironicclient      2.8.0          2.7.0
stx/git/python-keystoneauth      3.14.0         3.13.1
stx/git/python-magnumclient      2.13.0         2.12.0
stx/git/python-muranoclient      1.2.0          1.1.1
stx/git/python-novaclient        14.1.0         13.0.0
stx/git/python-openstackclient   3.19.0         3.18.0
stx/git/python-openstacksdk      0.31.1         0.25.0
stx/git/python-pankoclient       0.6.0          0.5.0
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sgw at linux.intel.com Tue Jul 2 20:45:08 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Tue, 2 Jul 2019 13:45:08 -0700
Subject: [Starlingx-discuss] Fwd: [Bug 1834116] [NEW] sysadmin user not locked out after 5 wrong password attempts
Message-ID: <80de666d-017d-d9a2-2783-7ecf4f13eeac@linux.intel.com>

Folks,

I am looking into this launchpad [0]; it appears to be related to the pam_faillock module, which would be included in the /etc/pam.d/ configuration files.

When I searched through the StarlingX repos, including of course the stx-integ/config-files/pam-config configuration, I could not find any mention of the faillock module before or after the sysadmin change. I am not exactly sure how the sysadmin change would affect the pam configuration.

I found that the new StarlingX tests currently under review mention pam faillock (as I mention in the launchpad); it's a manual setup, not something that is defaulted in the current code.

Peng Peng mentions that faillock works with an older image (from May) that I don't have access to. If there is a copy of the ISO I can look at, I can validate that the pam setup actually includes the faillock module, and understand what's setting it and why the sysroot change would break it.

Is there a different subsystem that I am not aware of that is checking for authentication failures and lockouts?

[0] https://bugs.launchpad.net/bugs/1834116

Additional thoughts, pointers to code I am missing?

Thanks
  Sau!

-------- Forwarded Message --------
Subject: [Bug 1834116] [NEW] sysadmin user not locked out after 5 wrong password attempts
Date: Fri, 28 Jun 2019 19:26:40 -0000
From: Launchpad Bug Tracker <1834116 at bugs.launchpad.net>
Reply-To: Bug 1834116 <1834116 at bugs.launchpad.net>
To: sgw at linux.intel.com

Ghada Khalil (gkhalil) has assigned this bug to you for StarlingX:

Brief Description
-----------------
login as: sysadmin, after 5 wrong password attempt, system does not logout and still could login by correct password

Severity
--------
Major

Steps to Reproduce
------------------
1. 5 wrong password attempt
2.
login by correct password TC-name: test_linux_user_lockout Expected Behavior ------------------ system lockout and can not login Actual Behavior ---------------- login success Reproducibility --------------- Reproducible System Configuration -------------------- Multi-node system Lab-name: wcp_63-66 Branch/Pull Time/Commit ----------------------- stx master as of 20190622T013000Z Last Pass --------- 20190503T013000Z Timestamp/Logs -------------- 2019-06-23 04:07:57,399] 792 INFO MainThread test_linux_user_password_aging.test_linux_user_lockout:: 1: Expecting to fail to login with invalid password, host:128.224.151.85, user:sysadmin, password:123 [2019-06-23 04:07:57,400] 710 INFO MainThread test_linux_user_password_aging.log_in_raw:: logging onto host:128.224.151.85 as user:sysadmin with password:123 After 5 times attempt [2019-06-23 04:08:12,134] 747 INFO MainThread test_linux_user_password_aging.log_in_raw:: Error, expecting to fail but actually logged in, host:128.224.151.85 as user:sysadmin with password:Li69nux* output before:, after: Last failed login: Sun Jun 23 04:08:11 UTC 2019 from 128.224.150.21 on ssh:notty There were 10 failed login attempts since the last successful login. Last login: Sun Jun 23 04:07:57 2019 from 128.224.150.21 /etc/motd.d/00-header:  WARNING: Unauthorized access to this system is forbidden and will be prosecuted by law. By accessing this system, you agree that your actions may be monitored if unauthorized usage is suspected. [?1034hcontroller-1:~$ Test Activity ------------- Regression Testing ** Affects: starlingx Importance: Medium Assignee: Saul Wold (sgw-starlingx) Status: Triaged ** Tags: stx.2.0 stx.regression stx.retestneeded stx.security -- sysadmin user not locked out after 5 wrong password attempts https://bugs.launchpad.net/bugs/1834116 You received this bug notification because you are a bug assignee. From sgw at linux.intel.com Tue Jul 2 21:00:58 2019 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 2 Jul 2019 14:00:58 -0700 Subject: [Starlingx-discuss] Fwd: [Bug 1834116] [NEW] sysadmin user not locked out after 5 wrong password attempts In-Reply-To: <80de666d-017d-d9a2-2783-7ecf4f13eeac@linux.intel.com> References: <156175000040.31728.8909684393057514733.launchpad@soybean.canonical.com> <80de666d-017d-d9a2-2783-7ecf4f13eeac@linux.intel.com> Message-ID: Of course, just after I sent this I saw a note from BartW about the pam_tally2 module, which does the same thing as pam_faillock. I did a local test enabling pam_faillock and it worked as expected. Seems the issue might be in pam_tally2 This may also be a duplicate of [0] which was seen before the sysadmin change. Sau! [0] https://bugs.launchpad.net/starlingx/+bug/1814345 On 7/2/19 1:45 PM, Saul Wold wrote: > > Folks, > > I am looking into this launchpad [0], this appears to be related to > pam_faillock module which would be included in the /etc/pam.d/ > configuration files. > > When I search through the StarlingX repos, including of course the > stx-integ/config-files/pam-config configuration, I could not find any > mention of the faillock module before or after the sysadmin change. I am > not exactly sure how the sysadmin change would affect pam configuration. > > I found the new StarlingX test that are under review current has mention > of pam faillock (as I mention in the lauchpad), it's a manual setup, not > something that is defaulted in the current code. 
> > Peng Peng mentions that faillock works with an older image (from May) > that I don't have access to, if there is a copy of the ISO I can look > at, I can validate the pam setup actuall includes the faillock module > and to understand what's setting it and why the sysroot change would > break it. > > Is there a different subsystem that I am not aware that is checking for > Authentication failures and lockouts? > > [0] https://bugs.launchpad.net/bugs/1834116 > > Additional thoughts, pointers to code I am missing? > > Thanks >   Sau! > > > -------- Forwarded Message -------- > Subject: [Bug 1834116] [NEW] sysadmin user not locked out after 5 wrong > password attempts > Date: Fri, 28 Jun 2019 19:26:40 -0000 > From: Launchpad Bug Tracker <1834116 at bugs.launchpad.net> > Reply-To: Bug 1834116 <1834116 at bugs.launchpad.net> > To: sgw at linux.intel.com > > Ghada Khalil (gkhalil) has assigned this bug to you for StarlingX: > > Brief Description > ----------------- > login as: sysadmin, after 5 wrong password attempt, system does not > logout and still could login by correct password > > Severity > -------- > Major > > > Steps to Reproduce > ------------------ > 1. 5 wrong password attempt > 2. login by correct password > > TC-name: test_linux_user_lockout > > Expected Behavior > ------------------ > system lockout and can not login > > > Actual Behavior > ---------------- > login success > > > Reproducibility > --------------- > Reproducible > > > System Configuration > -------------------- > Multi-node system > > > Lab-name: wcp_63-66 > > > Branch/Pull Time/Commit > ----------------------- > stx master as of 20190622T013000Z > > > Last Pass > --------- > 20190503T013000Z > > > Timestamp/Logs > -------------- > 2019-06-23 04:07:57,399] 792  INFO  MainThread > test_linux_user_password_aging.test_linux_user_lockout:: 1: Expecting to > fail to login with invalid password, host:128.224.151.85, user:sysadmin, > password:123 > > [2019-06-23 04:07:57,400] 710  INFO  MainThread > test_linux_user_password_aging.log_in_raw:: logging onto > host:128.224.151.85 as user:sysadmin with password:123 > > After 5 times attempt > > > [2019-06-23 04:08:12,134] 747  INFO  MainThread > test_linux_user_password_aging.log_in_raw:: Error, expecting to fail but > actually logged in,  host:128.224.151.85 as user:sysadmin with > password:Li69nux* > > output before:, after: Last failed login: Sun Jun 23 04:08:11 UTC 2019 > from 128.224.150.21 on ssh:notty > There were 10 failed login attempts since the last successful login. > Last login: Sun Jun 23 04:07:57 2019 from 128.224.150.21 > > /etc/motd.d/00-header: > >  > WARNING: Unauthorized access to this system is forbidden and will be > prosecuted by law. By accessing this system, you agree that your > actions may be monitored if unauthorized usage is suspected. 
> > [?1034hcontroller-1:~$ > > > Test Activity > ------------- > Regression Testing > > ** Affects: starlingx >      Importance: Medium >      Assignee: Saul Wold (sgw-starlingx) >          Status: Triaged > > > ** Tags: stx.2.0 stx.regression stx.retestneeded stx.security From maria.g.perez.ibarra at intel.com Tue Jul 2 22:23:05 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 2 Jul 2019 22:23:05 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/02/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID=" 20190628T013000Z" from (link) ---------------------------------------------------------------------- Overall Results: Total = 423 Pass = 164 Fail = 8 Blocked = 2 Total executed = 174 Pass Rate = 94.25% ---------------------------------------------------------------------- Results per Domain: Regression - AIO-SX 23 PASS |1 FAIL|2 BLOCKED Regression - Backup & Restore Regression - Distributed Cloud Regression - Gnoochi 13 PASS Regression - FM Regression - HA Regression - Heat 10 PASS Regression - Horizon 4 PASS Regression - Install and Config Regression - Maintenance Regression - Networking 62 PASS | 3 FAIL Regression - Nova 2 PASS Regression - Security 27 PASS | 2 FAIL Regression - Storage Regression - Inventory 23 PASS | 2 FAIL System Test --------------------------------------------------------------------------- Bugs: Controller can't unlock after lock on AIO-SX : https://bugs.launchpad.net/starlingx/+bug/1833472 user does not login within configured time(60s) login is aborted : https://bugs.launchpad.net/starlingx/+bug/1833469 removing attributes from bash.log should not be possible : https://bugs.launchpad.net/starlingx/+bug/1833619 sysadmin user not locked out after 5 wrong password attempts : https://bugs.launchpad.net/starlingx/+bug/1834116 After pull data cable on the compute, no alarm has triggered : https://bugs.launchpad.net/starlingx/+bug/1834512 System account doesn't block after invalid login attempts : https://bugs.launchpad.net/starlingx/+bug/1814345 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From maria.g.perez.ibarra at intel.com Tue Jul 2 23:14:47 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 2 Jul 2019 23:14:47 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190702
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-02 (link)

Status: YELLOW

======================
Bare Metal environment
======================

AIO - Simplex:
Setup                 03 TCs
Provision-Containers  01 TCs
Sanity-OpenStack      49 TCs
Sanity-Platform       11 TCs
------------------------------
TOTAL: 64 TCs

AIO - Duplex:
Setup                 03 TCs
Provision-Containers  01 TCs
Sanity-OpenStack      52 TCs
Sanity-Platform       09 TCs
------------------------------
TOTAL: 65 TCs

Standard - Local Storage (2+2):
Setup                 03 TCs
Provision-Containers  01 TCs
Sanity-OpenStack      52 TCs
Sanity-Platform       09 TCs
------------------------------
TOTAL: 65 TCs

Standard - External Storage (2+2+2):
Setup                 03 TCs
Provision-Containers  01 TCs
Sanity-OpenStack      52 TCs | 2 FAIL
Sanity-Platform       05 TCs
------------------------------
TOTAL: 61 TCs

===================
Virtual Environment
===================

AIO - Simplex:
Setup             03 TCs
Provisioning      01 TCs
Sanity OpenStack  49 TCs
Sanity Platform   07 TCs
------------------------------
TOTAL: 60 TCs

AIO - Duplex:
Setup             03 TCs
Provisioning      01 TCs
Sanity OpenStack  51 TCs | 2 FAIL
Sanity Platform   05 TCs
------------------------------
TOTAL: 61 TCs

Standard - Local Storage (2+2):
Setup             03 TCs
Provisioning      01 TCs
Sanity OpenStack  52 TCs
Sanity Platform   05 TCs
------------------------------
TOTAL: 61 TCs

Standard - External Storage (2+2+2):
Setup             03 TCs
Provisioning      01 TCs
Sanity OpenStack  52 TCs
Sanity Platform   05 TCs
------------------------------
TOTAL: 61 TCs

application-apply stx-openstack stuck at processing chart: osh-openstack-nginx-ports-control https://bugs.launchpad.net/starlingx/+bug/1834070
Containers: stx-openstack reapply fails on neutron https://bugs.launchpad.net/starlingx/+bug/1833718

Regards
Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vivian.zhu at intel.com Wed Jul 3 00:00:52 2019
From: vivian.zhu at intel.com (Zhu, Vivian)
Date: Wed, 3 Jul 2019 00:00:52 +0000
Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/3
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FD0000@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FD0000@SHSMSX104.ccr.corp.intel.com>
Message-ID: <371DF9A763E9F44F924F4A821FC070264D0E2859@SHSMSX105.ccr.corp.intel.com>

Cindy, Yong, Bruce,

Regarding the QAT compression/decompression feature enablement on OpenStack, do you have a plan to create a JIRA workstream to track it?

Thanks!
- Vivian

SSG OTC NST Storage
Tel: (8621)61167437

-----Original Message-----
From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Tuesday, July 02, 2019 9:36 PM
To: 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent' ; Wold, Saul
Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/3

Agenda for 7/3 meeting:
- stx 3.0 features proposal review (all)
- Redfish support (https://storyboard.openstack.org/#!/story/2005861), Eric/Zhipeng
- Python2to3 transition (https://wiki.openstack.org/wiki/StarlingX/Python2, story to be created), Austin
- non-Openstack patch cleanup (remaining SB to be reviewed)
- Ceph containerization (https://storyboard.openstack.org/#!/story/2005527), Tingjie
- Support Kata container (https://storyboard.openstack.org/#!/story/2006145), Shuicheng
- QAT support in Cinder & Glance, Vivian - will track it in OpenStack sub-project.
- systemd standardization (Story to be created), Saul/Marcela
- Ceph test status report (Abraham/Fernando)
- stx 2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin, Alex/Bin)
- Opens (all)

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, July 3, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

- Cadence and time slot:
  - Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
- Call Details:
  - Zoom link: https://zoom.us/j/342730236
  - Dialing in from phone:
    - Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
    - Meeting ID: 342 730 236
    - International numbers available: https://zoom.us/u/ed95sU7aQ
- Meeting Agenda and Minutes:
  - https://etherpad.openstack.org/p/stx-distro-other

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Brent.Rowsell at windriver.com Wed Jul 3 00:55:56 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Wed, 3 Jul 2019 00:55:56 +0000
Subject: [Starlingx-discuss] Out of date package versions
In-Reply-To: <5bd90874-007f-18be-123d-c93a6f3f215f@windriver.com>
References: <5bd90874-007f-18be-123d-c93a6f3f215f@windriver.com>
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC256FF80@ALA-MBD.corp.ad.wrs.com>

These can be deleted as they are no longer used.

stx/git/python-magnumclient   2.13.0   2.12.0
stx/git/python-muranoclient   1.2.0    1.1.1

Brent

From: Scott Little [mailto:scott.little at windriver.com]
Sent: Tuesday, July 2, 2019 3:25 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Out of date package versions

A brief report showing where our package versions appear to have diverged from upstream, based on upstream's tags. If upstream has issued a new version, we should adopt it in our packaging. Please check your packages and update the spec files and TIS_BASE_SRCREV as required.
Scott

git                              upstream tag   STX ver
stx/git/python-cinderclient      4.2.0          4.1.0
stx/git/horizon                  15.1.0         14.0.0
stx/git/python-ironicclient      2.8.0          2.7.0
stx/git/python-keystoneauth      3.14.0         3.13.1
stx/git/python-magnumclient      2.13.0         2.12.0
stx/git/python-muranoclient      1.2.0          1.1.1
stx/git/python-novaclient        14.1.0         13.0.0
stx/git/python-openstackclient   3.19.0         3.18.0
stx/git/python-openstacksdk      0.31.1         0.25.0
stx/git/python-pankoclient       0.6.0          0.5.0
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yong.hu at intel.com Wed Jul 3 01:15:13 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Wed, 3 Jul 2019 01:15:13 +0000
Subject: [Starlingx-discuss] StarlingX 3.0 features
In-Reply-To: <6345119E91D5C843A93D64F498ACFA137452ED8E@SHSMSX101.ccr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com> <6345119E91D5C843A93D64F498ACFA137452ED8E@SHSMSX101.ccr.corp.intel.com>
Message-ID: <10759DB0-B614-44F7-987D-D946AD46EB01@intel.com>

Hi Forrest,

Did you mean the Network team plans to commit this feature for Stx.3.0? If so, are we going forward with this spec/patch: https://review.opendev.org/655830, in which the last comment was made on May 5th?

Regards,
Yong

On 02/07/2019, 1:42 AM, "Zhao, Forrest" > wrote:

Hi Bruce and TSC reviewers,

Here are the intents of "containerize OVS DPDK":

1. As StarlingX moves to containerization, most OpenStack components have been containerized. That includes OVS containerization, but OVS-DPDK is still running on the host. It's better to containerize OVS/DPDK as well, to leverage the benefits brought by containerization.

2. Currently, StarlingX supports OVS and OVS-DPDK. OVS is managed by openstack-helm and runs in a container, but OVS-DPDK is managed by puppet and runs directly on the host. Maintaining two implementations and keeping them consistent costs more resources than maintaining just one. For example, if we want to make some changes (upgrade the OVS version, enable some features), we need to make the changes in two places. It introduces much more upgrade/maintenance cost. "Containerize OVS DPDK" can eliminate such duplication and inconsistency.

Thanks,
Forrest

From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
Sent: Thursday, June 27, 2019 11:44 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] StarlingX 3.0 features

Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the "Not yet discussed" features before closing on the 3.0 feature list. Thank you!

brucej

Work item TSC status Lead Status IA platform features Not yet discussed Abraham, Saul, Ada Most work is validation, new features are integrated as we adopt newer kernels over time. Real time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable. Containerize OVS DPDK AR Yong Forrest Not yet approved, Yong to get with Forrest and confirm intent Performance testing Not yet discussed Victor, Ada Proposal in progress FPGA accelerator support Push to 4.0 Abraham, Ada Too big for 3.0 but will likely need to start soon.
FPGA hardware has been ordered OpenStack Train integration Approved Bruce (Dean) Continuous integration from OpenStack master Containerized Ceph Push to 4.0 Vivian Too big for 3.0 but will likely need to start soon Time Sensitive Networking Approved Forrest Spec in progress Kubernetes plugins for IA Partial Cindy Some reviews in progress, QAT approved, FPGA likely 4.0 Redfish Approved Cindy Spec in progress IOT device management Not yet discussed Abraham Demo’d @ Denver. POC / pathfinding work for a customer in progress, item likely too big for 3.0 SUSE build support & enablement Approved Abraham, Saul In progress, previously approved for 2.0 and continuing on Containerized OpenStack Clients Approved Dean? Nearly completed for 2.0, pushed to 3.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From forrest.zhao at intel.com Wed Jul 3 01:57:32 2019 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Wed, 3 Jul 2019 01:57:32 +0000 Subject: [Starlingx-discuss] StarlingX 3.0 features In-Reply-To: <10759DB0-B614-44F7-987D-D946AD46EB01@intel.com> References: <9A85D2917C58154C960D95352B22818BD07786A8@fmsmsx123.amr.corp.intel.com> <6345119E91D5C843A93D64F498ACFA137452ED8E@SHSMSX101.ccr.corp.intel.com> <10759DB0-B614-44F7-987D-D946AD46EB01@intel.com> Message-ID: <6345119E91D5C843A93D64F498ACFA137452F33B@SHSMSX101.ccr.corp.intel.com> Yes, it’s a committed feature for STX 3.0 from networking team. The latest patch set of its spec at https://review.opendev.org/#/c/655830/ has addressed all open comments. We’ll ping reviewers to move forward. Thanks, Forrest From: Hu, Yong Sent: Wednesday, July 3, 2019 9:15 AM To: Zhao, Forrest ; Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 3.0 features Hi Forrest, Did you mean Network team has plan to commit this feature for Stx.3.0? If so, are we going forward with this spec/patch: https://review.opendev.org/655830, in which the last comment was made on May 5th? Regards, Yong On 02/07/2019, 1:42 AM, "Zhao, Forrest" > wrote: Hi Bruce and TSC reviewers, Here are intents of “containerize OVS DPDK”, 1. As StarlingX moves to containerization, most OpenStack components have been containerized. That includes OVS containerization, but OVS-DPDK is still running on host. It’s better to containerize OVS/DPDK as well, to leverage benefits brought by containerization. 2. Currently, StarlingX supports OVS and OVS-DPDK. OVS is managed by openstack-helm, and running in container. But OVS-DPDK is managed by puppet and run directly on the host. Maintaining two implements and keeping them consistent cost more resources than maintaining just one implement. For example. If we want to make some changes(upgrade OVS version, enable some features), we need the changes at two places. It introduces much more upgrade/maintenance costs. “Containerize OVS DPDK” can eliminate such duplication and inconsistency. Thanks, Forrest From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, June 27, 2019 11:44 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX 3.0 features Here is a list of StarlingX 3.0 features that our team plans to work on. I would like to ask the TSC to please review the “Not yet discussed” features before closing on the 3.0 feature list. Thank you! brucej Work item TSC status Lead Status IA platform features Not yet discussed Abraham, Saul, Ada Most work is validation, new features are integrated as we adopt newer kernels over time. 
Real time features for industrial use cases are going into 5.x kernels and may (or may not) be back-portable. Containerize OVS DPDK AR Yong Forrest Not yet approved, Yong to get with Forrest and confirm intent Performance testing Not yet discussed Victor, Ada Proposal in progress FPGA accelerator support Push to 4.0 Abraham, Ada Too big for 3.0 but will likely need to start soon. FPGA hardware has been ordered OpenStack Train integration Approved Bruce (Dean) Continuous integration from OpenStack master Containerized Ceph Push to 4.0 Vivian Too big for 3.0 but will likely need to start soon Time Sensitive Networking Approved Forrest Spec in progress Kubernetes plugins for IA Partial Cindy Some reviews in progress, QAT approved, FPGA likely 4.0 Redfish Approved Cindy Spec in progress IOT device management Not yet discussed Abraham Demo’d @ Denver. POC / pathfinding work for a customer in progress, item likely too big for 3.0 SUSE build support & enablement Approved Abraham, Saul In progress, previously approved for 2.0 and continuing on Containerized OpenStack Clients Approved Dean? Nearly completed for 2.0, pushed to 3.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mingyuan.qi at intel.com Wed Jul 3 02:11:22 2019 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Wed, 3 Jul 2019 02:11:22 +0000 Subject: [Starlingx-discuss] [DOCS] Add Ironic deployment doc In-Reply-To: References: Message-ID: Hi docs team, The attachment is too big that the mail needs to be moderated. So I cancel the posting and share the recipe here: https://drive.google.com/file/d/1_HkqIR1tVqGv6SaA3pyiiG9_DL0MTADO/view?usp=sharing Mingyuan From: Qi, Mingyuan Sent: Tuesday, July 2, 2019 16:09 To: starlingx-discuss at lists.starlingx.io Subject: [DOCS] Add Ironic deployment doc Hi docs team, With the Ironic SB finished, I'd like to add a doc for standard deployment with ironic in starlingx. The SB is: https://storyboard.openstack.org/#!/story/2004760 The attachment is the recipe to enable an ironic node. I was trying to merge the contents to current doc, but seems the official standard deployment guide is still at "config_controller" age. https://docs.starlingx.io/deployment_guides/latest/controller_storage/index.html So please help to add this recipe to "Standard with Ironic" page if possible, thanks! Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Jul 3 03:17:08 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 3 Jul 2019 03:17:08 +0000 Subject: [Starlingx-discuss] [DOCS] Add Ironic deployment doc In-Reply-To: References: , Message-ID: <3808363B39586544A6839C76CF81445EA1B827AF@ORSMSX104.amr.corp.intel.com> Thanks Mingyuan for this submission. We'll submit a review to Gerrit and will include you on the watch list. Thx. -- Mike ________________________________ From: Qi, Mingyuan [mingyuan.qi at intel.com] Sent: Tuesday, July 02, 2019 7:11 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] [DOCS] Add Ironic deployment doc Hi docs team, The attachment is too big that the mail needs to be moderated. 
So I cancel the posting and share the recipe here: https://drive.google.com/file/d/1_HkqIR1tVqGv6SaA3pyiiG9_DL0MTADO/view?usp=sharing Mingyuan From: Qi, Mingyuan Sent: Tuesday, July 2, 2019 16:09 To: starlingx-discuss at lists.starlingx.io Subject: [DOCS] Add Ironic deployment doc Hi docs team, With the Ironic SB finished, I’d like to add a doc for standard deployment with ironic in starlingx. The SB is: https://storyboard.openstack.org/#!/story/2004760 The attachment is the recipe to enable an ironic node. I was trying to merge the contents to current doc, but seems the official standard deployment guide is still at “config_controller” age. https://docs.starlingx.io/deployment_guides/latest/controller_storage/index.html So please help to add this recipe to “Standard with Ironic” page if possible, thanks! Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From yizhoux.xu at intel.com Wed Jul 3 11:39:57 2019 From: yizhoux.xu at intel.com (Xu, YizhouX) Date: Wed, 3 Jul 2019 11:39:57 +0000 Subject: [Starlingx-discuss] memory reserved for vm not enough Message-ID: <8E7F30EFCB9B334AAA0491274BEDBE750104D5E2@SHSMSX105.ccr.corp.intel.com> Hi all: I'm testing pci-passthrough with a AIO stx node(iso version:20190607T142331Z), after install completed, I found that there's not enough memory reversed for my vms(server total memory is 64G,but only 7-8G can be allocated for vms), Checked node with `system host-memory-list controller-0` and find that hugepages was configured and I did't deply dpdk-ovs. Here are my questions: 1. Did stx turn on hugepage as default? Can this feature be tured off to get all the reserved memory for my vms (I don't need dpdk-ovs)? 2. If hugepage is necessary, refer to https://docs.openstack.org/nova/pike/admin/huge-pages.html ,I've Customized flavor for huge pages allocations with `openstack flavor set m1.large --property hw:mem_page_size=large` But it did't work (vm boot successfully , no memory allocated from reseved hugepages). Did I do it in right way? Where can I trace the error? Here 're the detail of my node: [wrsroot at controller-0 ~(keystone_admin)]$ system host-memory-list controller-0 +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ | processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_tota | app_hp | app_hp | app_hp_p | app_hp | app_hp | app_hp_pen | app_hp_use_1G | | | al(MiB) | rm(MiB) | il(MiB) | configured | size(M | total | avail | _reqd | l_4K | _total | _avail | ending_2 | _total | _avail | ding_1G | | | | | | | | iB) | | | | | _2M | _2M | M | _1G | _1G | | | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ | 0 | 51308 | 11000 | 51308 | True | 1024 | 0 | 0 | None | 1789952 | 22158 | 22158 | None | 0 | 0 | None | True | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ [wrsroot at controller-0 ~(keystone_admin)]$ cat /proc/meminfo | grep Huge HugePages_Total: 22158 HugePages_Free: 22158 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB Best Regards, Xu, YiZhou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cindy.xie at intel.com Wed Jul 3 12:37:19 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 3 Jul 2019 12:37:19 +0000 Subject: [Starlingx-discuss] memory reserved for vm not enough In-Reply-To: <8E7F30EFCB9B334AAA0491274BEDBE750104D5E2@SHSMSX105.ccr.corp.intel.com> References: <8E7F30EFCB9B334AAA0491274BEDBE750104D5E2@SHSMSX105.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FD1270@SHSMSX104.ccr.corp.intel.com> Wondering if Austin or Bin can provide some insight here. From: Xu, YizhouX [mailto:yizhoux.xu at intel.com] Sent: Wednesday, July 3, 2019 7:40 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] memory reserved for vm not enough Hi all: I'm testing pci-passthrough with a AIO stx node(iso version:20190607T142331Z), after install completed, I found that there's not enough memory reversed for my vms(server total memory is 64G,but only 7-8G can be allocated for vms), Checked node with `system host-memory-list controller-0` and find that hugepages was configured and I did't deply dpdk-ovs. Here are my questions: 1. Did stx turn on hugepage as default? Can this feature be tured off to get all the reserved memory for my vms (I don't need dpdk-ovs)? 2. If hugepage is necessary, refer to https://docs.openstack.org/nova/pike/admin/huge-pages.html ,I've Customized flavor for huge pages allocations with `openstack flavor set m1.large --property hw:mem_page_size=large` But it did't work (vm boot successfully , no memory allocated from reseved hugepages). Did I do it in right way? Where can I trace the error? Here 're the detail of my node: [wrsroot at controller-0 ~(keystone_admin)]$ system host-memory-list controller-0 +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ | processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_tota | app_hp | app_hp | app_hp_p | app_hp | app_hp | app_hp_pen | app_hp_use_1G | | | al(MiB) | rm(MiB) | il(MiB) | configured | size(M | total | avail | _reqd | l_4K | _total | _avail | ending_2 | _total | _avail | ding_1G | | | | | | | | iB) | | | | | _2M | _2M | M | _1G | _1G | | | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ | 0 | 51308 | 11000 | 51308 | True | 1024 | 0 | 0 | None | 1789952 | 22158 | 22158 | None | 0 | 0 | None | True | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ [wrsroot at controller-0 ~(keystone_admin)]$ cat /proc/meminfo | grep Huge HugePages_Total: 22158 HugePages_Free: 22158 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB Best Regards, Xu, YiZhou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Brent.Rowsell at windriver.com Wed Jul 3 12:45:35 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 3 Jul 2019 12:45:35 +0000 Subject: [Starlingx-discuss] memory reserved for vm not enough In-Reply-To: <8E7F30EFCB9B334AAA0491274BEDBE750104D5E2@SHSMSX105.ccr.corp.intel.com> References: <8E7F30EFCB9B334AAA0491274BEDBE750104D5E2@SHSMSX105.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC2570858@ALA-MBD.corp.ad.wrs.com> Add extra spec hw:mem_page_size=any to your flavor Brent From: Xu, YizhouX [mailto:yizhoux.xu at intel.com] Sent: Wednesday, July 3, 2019 7:40 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] memory reserved for vm not enough Hi all: I'm testing pci-passthrough with a AIO stx node(iso version:20190607T142331Z), after install completed, I found that there's not enough memory reversed for my vms(server total memory is 64G,but only 7-8G can be allocated for vms), Checked node with `system host-memory-list controller-0` and find that hugepages was configured and I did't deply dpdk-ovs. Here are my questions: 1. Did stx turn on hugepage as default? Can this feature be tured off to get all the reserved memory for my vms (I don't need dpdk-ovs)? 2. If hugepage is necessary, refer to https://docs.openstack.org/nova/pike/admin/huge-pages.html ,I've Customized flavor for huge pages allocations with `openstack flavor set m1.large --property hw:mem_page_size=large` But it did't work (vm boot successfully , no memory allocated from reseved hugepages). Did I do it in right way? Where can I trace the error? Here 're the detail of my node: [wrsroot at controller-0 ~(keystone_admin)]$ system host-memory-list controller-0 +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ | processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_tota | app_hp | app_hp | app_hp_p | app_hp | app_hp | app_hp_pen | app_hp_use_1G | | | al(MiB) | rm(MiB) | il(MiB) | configured | size(M | total | avail | _reqd | l_4K | _total | _avail | ending_2 | _total | _avail | ding_1G | | | | | | | | iB) | | | | | _2M | _2M | M | _1G | _1G | | | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ | 0 | 51308 | 11000 | 51308 | True | 1024 | 0 | 0 | None | 1789952 | 22158 | 22158 | None | 0 | 0 | None | True | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ [wrsroot at controller-0 ~(keystone_admin)]$ cat /proc/meminfo | grep Huge HugePages_Total: 22158 HugePages_Free: 22158 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB Best Regards, Xu, YiZhou -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin.sun at intel.com Wed Jul 3 12:49:41 2019 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 3 Jul 2019 12:49:41 +0000 Subject: [Starlingx-discuss] memory reserved for vm not enough In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FD1270@SHSMSX104.ccr.corp.intel.com> References: <8E7F30EFCB9B334AAA0491274BEDBE750104D5E2@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD1270@SHSMSX104.ccr.corp.intel.com> Message-ID: Hi Yizhou: You can use 'system host-memory-modify' command to adjust. memory config. For example : 1) lock AIO controller-0 2) system host-memory-modify controller-0 -2M -f vswitch. 3) unlock AIO controller-0. When controller-0 is available again. the memory will be adjusted Thanks. BR Austin Sun. From: Xie, Cindy Sent: Wednesday, July 3, 2019 8:37 PM To: Xu, YizhouX ; 'starlingx-discuss at lists.starlingx.io' ; Yang, Bin ; Sun, Austin Subject: RE: [Starlingx-discuss] memory reserved for vm not enough Wondering if Austin or Bin can provide some insight here. From: Xu, YizhouX [mailto:yizhoux.xu at intel.com] Sent: Wednesday, July 3, 2019 7:40 PM To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] memory reserved for vm not enough Hi all: I'm testing pci-passthrough with a AIO stx node(iso version:20190607T142331Z), after install completed, I found that there's not enough memory reversed for my vms(server total memory is 64G,but only 7-8G can be allocated for vms), Checked node with `system host-memory-list controller-0` and find that hugepages was configured and I did't deply dpdk-ovs. Here are my questions: 1. Did stx turn on hugepage as default? Can this feature be tured off to get all the reserved memory for my vms (I don't need dpdk-ovs)? 2. If hugepage is necessary, refer to https://docs.openstack.org/nova/pike/admin/huge-pages.html ,I've Customized flavor for huge pages allocations with `openstack flavor set m1.large --property hw:mem_page_size=large` But it did't work (vm boot successfully , no memory allocated from reseved hugepages). Did I do it in right way? Where can I trace the error? 
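As a minimal sketch of the page-size experiment suggested earlier in this thread (the flavor, image and network names below are illustrative placeholders, not objects from this system):

$ openstack flavor set m1.large --property hw:mem_page_size=any
$ openstack server create --flavor m1.large --image cirros --network tenant-net hp-vm
$ # once the VM is ACTIVE, the hugepage counters on the hosting node should drop:
$ grep Huge /proc/meminfo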
Here 're the detail of my node: [wrsroot at controller-0 ~(keystone_admin)]$ system host-memory-list controller-0 +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ | processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_tota | app_hp | app_hp | app_hp_p | app_hp | app_hp | app_hp_pen | app_hp_use_1G | | | al(MiB) | rm(MiB) | il(MiB) | configured | size(M | total | avail | _reqd | l_4K | _total | _avail | ending_2 | _total | _avail | ding_1G | | | | | | | | iB) | | | | | _2M | _2M | M | _1G | _1G | | | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ | 0 | 51308 | 11000 | 51308 | True | 1024 | 0 | 0 | None | 1789952 | 22158 | 22158 | None | 0 | 0 | None | True | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+----------+--------+--------+----------+--------+--------+------------+---------------+ [wrsroot at controller-0 ~(keystone_admin)]$ cat /proc/meminfo | grep Huge HugePages_Total: 22158 HugePages_Free: 22158 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB Best Regards, Xu, YiZhou -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Jul 2 13:19:56 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 2 Jul 2019 13:19:56 +0000 Subject: [Starlingx-discuss] Notes from Distro.openstack call July 2 2019 Message-ID: <9A85D2917C58154C960D95352B22818BD077B6FA@fmsmsx123.amr.corp.intel.com> 7/2 meeting * Dean to run this meeting starting next week (following week, Dean is on vacation next week). Yong to run the meeting next week. * Nova placement helm chart https://review.opendev.org/#/c/662229/ - still pending? * Zhipeng - please attend the Tuesday Helm meeting and push this review * It would be good to respond to Chris' comments, other reviewers may not look closer until they see that as resolved * Final helm override status (Gerry) - expect all reviews out this week * ETA end of this week - 1 review merged, 1 open, two more to go * Orphan instance cleanup: https://review.openstack.org/#/c/627765/ * Eric F - I put this in front of people (Sean, Mel, Matt), and they're actively reviewing it. Some comments have been posted. There's some concern as to how much of it is needed versus reusing "the existing periodic for handling deleted instances". (I don't know anything about this, I'm just the messenger here.) * Yongli - Re-writing the whole patch as per feedback, review has WF-1 pending the re-write. Mel wants this done the way it was discussed at the PTG. * NUMA topology: https://review.openstack.org/#/c/621476/ * Eric F - I added this to the runway queue [1] under the assumption that it is code complete. If it's not, please let me know. * Yong Li - the reviews are moving forward, addressing feedback. * On a somewhat related topic, do you know if Artom plans to continue his NUMA live migration work? If not, would it be appropriate for someone else to take it on? * Eric F - I think Artom is traveling right now. But looking at the series [2], the bottom patch [3] was last touched on June 12th, and appears to be near the top of his queue. So I wouldn't be worried just yet. 
* Rebase to the new Nova branch:
  * We need a volunteer to do a build to test the f/stein.2 branch with the latest Nova. Tests should include creating a couple VMs and making sure they can be started.
    - Zhipeng has a related bug (1827692) and should be able to do this. AR Yong to check with him. If not, please find someone else who can.
    - https://github.com/starlingx-staging/stx-nova/tree/stx/stein.2
  * This update should be done by whoever is responsible for the change. It requires updating the image directives file to reference the new branch as the PROJECT_REF value: https://opendev.org/starlingx/upstream/src/branch/master/openstack/python-nova/centos/stx-nova.stable_docker_image#L5
  * Instructions on building images can be found here: https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Image_Build_Command
  * If I wanted to test the build of this updated directives file, I can use the latest CENGN-built base and wheels, and run:
    - BUILD_STREAM=stable
    - BRANCH=master
    - CENTOS_BASE=starlingx/stx-centos:${BRANCH}-${BUILD_STREAM}-latest
    - WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-${BUILD_STREAM}-wheels.tar
    - time $MY_REPO/build-tools/build-docker-images/build-stx-images.sh --stream ${BUILD_STREAM} --base ${CENTOS_BASE} --wheels ${WHEELS} --only stx-nova
* Shuquan - we had a patch on live migration merged last week. Getting some +2's on two reviews. Working with the Horizon PTL and reached agreement. Shuquan to provide more details.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lilong-neu at neusoft.com Wed Jul 3 10:29:30 2019
From: lilong-neu at neusoft.com (lilong-neu at neusoft.com)
Date: Wed, 3 Jul 2019 10:29:30 +0000
Subject: [Starlingx-discuss] [StarlingX] Regardings magnum service on latest stx version
Message-ID: <317AB83A10F93A4895BD9A9AE0931BE597E14D@MPS-SYMBX03.neusoft.internal>

Hello StarlingX guys,

We have confirmed the magnum service on the latest stx version many times, and it directly failed to run, so we suppose that the new version of starlingx does not support the magnum service and it may be replaced by other services. According to the current situation, if magnum is no longer maintained in the new version of starlingx, it is recommended to close the bug: https://bugs.launchpad.net/starlingx/+bug/1820324

Could you give some suggestions?

Others: we tried to configure magnum in the starlingx (2019/06/26) environment according to the official website (https://docs.openstack.org/magnum/latest/install/install-rdo.html#top); when installing the related services, the packages are not available from the configured repositories. The log is as follows:

---------------------
controller-0:~$ sudo yum install openstack-magnum-api openstack-magnum-conductor python-magnumclient
Password:
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
base    | 3.6 kB  00:00:00
extras  | 3.4 kB  00:00:00
updates | 3.4 kB  00:00:00
No package openstack-magnum-api available.
No package openstack-magnum-conductor available.
Nothing to do
---------------------

----------------------------------------
Neusoft Corporation
Neusoft Group (Dalian) Co., Ltd.
No.
901 Huangpu Road, Dalian 116085, PRC
Website: www.neusoft.com
Mobile: (86) 15840916693
Tel: (86 0411) 8483 2794
E-mail: lilong-neu at neusoft.com
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful. If you have received this communication in error, please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you.
---------------------------------------------------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cindy.xie at intel.com Wed Jul 3 13:56:51 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 3 Jul 2019 13:56:51 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 7/3
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FD23AD@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for 7/3 meeting:

- stx 3.0 features proposal review (all)
- Redfish support (https://storyboard.openstack.org/#!/story/2005861), Eric/Zhipeng: the spec is already uploaded and ready for review. Encourage folks to review it. Gating for 3.0.
- Python2to3 transition (https://wiki.openstack.org/wiki/StarlingX/Python2, story: https://storyboard.openstack.org/#!/story/2006158), Austin. Gating for 3.0.
  Austin update: analyzed the 2.0 ISO and found there are still ~300 RPM packages that have Python code, out of ~1100 packages, so it seems like only 88 rpms need some action (upgrade or replacement or more analysis).
  Saul: suggest focusing only on those RPMs used by flocks instead of the whole system. Is it feasible to get CentOS 7.6 python3 clean?
  Clarification of goal: shall we figure out the Python3 solution for CentOS 7.6 (instead of CentOS 8)?
  Brent question: for the packages that are not Python3 compliant, we need analysis of where they come from and how they are used in STX.
  analysis: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx
- non-Openstack patch cleanup (remaining SB to be reviewed): defer this to 4.0
- Ceph containerization (https://storyboard.openstack.org/#!/story/2005527), Tingjie:
  spec updated: https://review.opendev.org/#/c/656371/8/specs/2019.03/approved/containerization-2005527-containerized-ceph-deployment-and-provision.rst; effort and task breakdown is WIP. We can review the spec and workload next week. No Ceph upgrade will be required before the Ceph containerization story is merged into STX.
- Support Kata container (https://storyboard.openstack.org/#!/story/2006145), Shuicheng:
  Evaluating Kata container enabling in STX and will report out to the team next week. Some issues in K8s for integration. A spec will be required. Shall be in 4.0 scope.
- QAT support in Cinder & Glance, Vivian - will track it in the OpenStack sub-project.
- systemd standardization (Story to be created), Saul/Marcela:
  For 19 of the ~50 new OpenSuse packages, systemd reports warnings. Looking at what needs to be done for those 19 and will create SB and tasks for build/installation/validation etc. Can start for 3.0 but a gating feature.
- Ceph test status report (Abraham/Fernando)
  test status tracking: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1145711595
  22 out of 26 tests are passing.
- stx 2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin, Alex/Bin)
  1831635: incomplete and waiting for log. Cannot reproduce.
  1833738: Bob is working on the helm-chart override issue; there are several similar issues associated.
  1830736: Tingjie to work w/ Martin to accelerate the progress.
  1832854: defer to Brent to decide the priority.
  1814345: 1834116 is a duplicate of this one. Haitao to look into it; talk to Saul to understand the info from Saul.
  1827258: Bin root-caused it but needs test support from the submitter. Send email to Numan in WR to ask for test resources to support.
- Opens (all) - none

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, July 3, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

- Cadence and time slot:
  - Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
- Call Details:
  - Zoom link: https://zoom.us/j/342730236
  - Dialing in from phone:
    - Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
    - Meeting ID: 342 730 236
    - International numbers available: https://zoom.us/u/ed95sU7aQ
- Meeting Agenda and Minutes:
  - https://etherpad.openstack.org/p/stx-distro-other

From Brent.Rowsell at windriver.com Wed Jul 3 14:24:58 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Wed, 3 Jul 2019 14:24:58 +0000
Subject: [Starlingx-discuss] [StarlingX] Regardings magnum service on latest stx version
References: <317AB83A10F93A4895BD9A9AE0931BE597E14D@MPS-SYMBX03.neusoft.internal>
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC2570BFF@ALA-MBD.corp.ad.wrs.com>

Magnum is not supported/integrated in stx2.0

Brent

From: lilong-neu at neusoft.com [mailto:lilong-neu at neusoft.com]
Sent: Wednesday, July 3, 2019 6:30 AM
To: starlingx-discuss at lists.starlingx.io
Cc: shuicheng.lin at intel.com; hai.tao.wang at intel.com; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Su Yang >; zhaos at neusoft.com; cindy.xie at intel.com; 张志国 >; zhaos at neusoft.com; wanghejun at neusoft.com
Subject: [Starlingx-discuss] [StarlingX] Regardings magnum service on latest stx version

Hello StarlingX guys, we have confirmed magnum service at latest stx version many times which derectly failed to run. so we supposed that the new version of starlingx does not support the magnum service and may be replaced by other services. According to the current situation, if magnum is no longer maintained in the new version of starlingx, it is recommended to close the bug: https://bugs.launchpad.net/starlingx/+bug/1820324 Could you give some suggestions?
others: we tried to configure magnum in the starlingx (2019/06/26) environment, according to the official website(https://docs.openstack.org/magnum/latest/install/install-rdo.html#top), after installation When the related service is prompted, there is no corresponding download resource. The log is as follows: --------------------- controller-0:~$ sudo yum install openstack-magnum-api openstack-magnum-conductor python-magnumclient Password: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile base | 3.6 kB 00:00:00 extras | 3.4 kB 00:00:00 updates | 3.4 kB 00:00:00 No package openstack-magnum-api available. No package openstack-magnum-conductor available. Nothing to do --------------------- ---------------------------------------- Neusoft Corporation Neusoft Group (Dalian) Co., Ltd. No. 901 Huangpu Road, Dalian 116085, PRC Website: www.neusoft.com Mobile: (86) 15840916693 Tel:(86 0411) 8483 2794 E-mail: lilong-neu at neusoft.com --------------------------------------------------------------------------------------------------- Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful.If you have received this communication in error,please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you. --------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jul 3 14:55:35 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 3 Jul 2019 14:55:35 +0000 Subject: [Starlingx-discuss] Community Call (July 3, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A84FBA@ALA-MBD.corp.ad.wrs.com> Notes/actions from today's meeting... - [brucej / bill] Use of Launchpad Blueprints for feature tracking - discussed in last week's TSC & Release Team meetings - Bruce recommending to use Blueprints to maintain our backlog list - it meets the needs of maintaining the TSC's backlog - could it replace StoryBoard? - it seems not (historically, StoryBoard was created to replace/augment Blueprint) - we need to work out some process flow questions - like how does something transition from being in a Blueprint to a StoryBoard - Brent questioned why we wouldn't just go to Launchpads for stories, if we're going to go to Blueprint - we generally seem to agree that it'd be *nice* if we could centralize everything in Launchapd (Bugs & Blueprints), but Blueprints don't seem to be able to replace StoryBoard for what we're using it for - Dean reiterated that the TSC is happy to let the Release Team make the recommendation on this, Brent agreed - ACTION: release team make the recommendation in the next TSC meeting - defect trend: https://docs.google.com/spreadsheets/d/1DZZgqrCIL6wxv51_yFBk6Lfmtf1AqPD6z7e5hEs3prU/edit#gid=300550657 - currently sitting at 138 vs. 
a fcast of 130 - let's talk about how to de-gate a bug - the TL needs to sign off on the de-gate, as agreed previously - ACTION: Bill to send request to domain owners to go through this process - Deployment Docs (Bart) - current state is that wikis are being updated, docs aren't keeping pace - even the most accurate docs are no longer accurate - Docs Gerrit queue: https://review.opendev.org/#/q/project:starlingx/docs,n,z - ACTION: Michael Tullis provide a readout on the current state of affairs for the various documents - open actions from previous meetings... - ACTION: Numan & Ada to sort out how aggregate regression reporting will be done (manual & automated - they'll tackle this after Numan's back next week - ACTION: Frank update on the forecast for the Docker image list - they opened a launchpad (https://bugs.launchpad.net/starlingx/+bug/1834504), it's currently not gating 2.0 for Al - if time permits, we'll look at it later in the release - instructions are in the LP - ACTION: Bill start checking if any 'new' people emails are going unresponded - yes, a few are, Bill to follow up - ACTION: Scott & Dean to talk about the mechanics for big files - Scott was looking into something, Bill will follow up w/ him - ACTION: Frank to talk to CENGN about getting sufficient space (pending any other parameters from Scott) - Frank has reached out, waiting on a response - ACTION: Dean find out what our options for increasing per mail size limit - pending - ACTION: Bill check with Ian about the logistics/timing of a mid-cycle meeting - will provide update next week - ACTION: Bruce (or Doc team) let us know if there is such a thing as stx.1.0 install docs - Yes - if you look at https://docs.starlingx.io/deployment_guides/index.html or https://docs.starlingx.io/installation_guide/index.html you will see that the deployment and installation documents are versioned - "Current" is the latest active release (stx.1.0) and "latest" is the upcoming release (stx.2.0). The Operation Guides are also versioned similarly. 
    - Bill to check if this still needs to be sent
  - ACTION: Bruce to find out when the OpenStack Helm meeting is so we can represent (the changes haven't been pushed up there yet)
    - From wiki.openstack.org/wiki/Openstack-helm: Meetings Every Tuesday @ 3PM UTC, #openstack-meeting-4
    - Yong to relay this info to the person who needs it
  - ACTION: Bill to chase the owners of incomplete bugs so we can reduce the set down to a very small number (then it's not a big deal whether we close or de-prioritize)
    - the chasing will start imminently - there are currently 29 such bugs (https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.status=Incomplete)
  - ACTION: Bill to socialize with the owners that the oldest bugs should be the first to be removed from the gating list - will include in gating bug chasing
  - ACTION: Bill to reach out to the domain/team owners to provide incoming/outgoing data for the resolution forecast - this was done, it's factored into the current forecast
  - ACTION: Bill follow up on status of bitergia changes
    - ansible-playbook repos: done (https://gitlab.com/Bitergia/c/OSF/support/issues/26)
    - adding github repos: done (https://gitlab.com/Bitergia/c/OSF/support/issues/25)
    - auto add starlingx repos: open (https://gitlab.com/Bitergia/c/OSF/support/issues/27)
    - Dean reminded that we are part of a trial and that our input is important to the decision on OpenStack adoption
    - Bart: no way to look at statistics for reviewers, doesn't seem to have been added yet
    - ACTION: Bill to follow up on this w/ Bart/Thierry
  - ACTION: Numan/Yang arrange an automation framework info session for the Community (in a few weeks after Yang's vacation)

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, July 2, 2019 8:23 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Community Call (July 3, 2019)

Hi everyone, we will be holding the Community Call tomorrow, at the usual time. Please feel free to add topics to the agenda at [0].

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190703T1400

From yi.c.wang at intel.com Wed Jul 3 15:34:21 2019
From: yi.c.wang at intel.com (Wang, Yi C)
Date: Wed, 3 Jul 2019 15:34:21 +0000
Subject: [Starlingx-discuss] a question about starlingx error handling behavior
In-Reply-To: 
References: <210898B96CA058408C55992CCAD98676C101F633@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

Hi Eric,

I retested it. And here is all the information you requested. I uploaded them to my google drive. Below is the link.
https://drive.google.com/open?id=1LiCgPz5iS3SApb0Em8ZyA56lryAU-3Vm

I conducted two tests. For case 1, the system can recover. For case 2, it can't.
Case 1: pull the active controller management cable for longer than 30s, and then reinsert it
Case 2: pull the active controller management cable for less than 30s, and then reinsert it

For case 2, since controller-1 was shown as "offline" on controller-0, "collect all" can't get the information of controller-1. So I copied the whole folder "/var/log" of controller-1. It is included in the shared zip package. If you need more information, let me know. Thanks for your help again!

Thanks.
Yi From: Wang, Yi C Sent: Tuesday, July 2, 2019 9:21 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] a question about starlingx error handling behavior Hi Eric, Thank you, Eric! I will collect all the information and get back to you soon. I confirm that I physically pulled the cable. Thanks. Yi From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Tuesday, July 2, 2019 7:54 PM To: Wang, Yi C > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] a question about starlingx error handling behavior Please perform both cases and for each - indicate what cables/interfaces were pulled - indicate what hosts the cables were pulled from - indicate approximate timestamp of when the cable was pulled - indicate approximate timestamp of when the cable was reinserted - Run 'collect all' and provide me access to the collect tarball Also, just to be sure ... please confirm that you are physically pulling the cable and not just ifdowning the interface. Eric. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Sunday, June 30, 2019 9:45 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I am working on the LP 1815513. Based on my tests, if I unplug the cable of active controller for management network for a long time (for example, 30s), and then plug it, the whole system can recover after some reboots. But if I unplug the cable for a short time, and then plug it. The whole system can't recover. I need to lock/unlock controllers manually to bring the system back. So my questions are: 1. Is the behavior acceptable? (recover the system by manual lock/unlock operations) 2. If the answer is no for #1, we need the system to recover automatically. I am not familiar with internal maintenance logics, could you give me some hints? Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Wed Jul 3 19:33:34 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Wed, 3 Jul 2019 19:33:34 +0000 Subject: [Starlingx-discuss] system host-if-modify error In-Reply-To: <8557B550001AFB46A43A0CCC314BF85168772C91@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85168772896@FMSMSX108.amr.corp.intel.com> <70A7408C6E1BFB41B192A929744D8523C1D29446@ALA-MBD.corp.ad.wrs.com> <8557B550001AFB46A43A0CCC314BF85168772C91@FMSMSX108.amr.corp.intel.com> Message-ID: <8557B550001AFB46A43A0CCC314BF85168773199@FMSMSX108.amr.corp.intel.com> FYI... Seems that it is an issue. A LP was opened: https://bugs.launchpad.net/starlingx/+bug/1835115 Regards. 
Juan Carlos Alonso From: Alonso, Juan Carlos Sent: Tuesday, July 2, 2019 11:28 AM To: Legacy, Allain ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: system host-if-modify error Hello, Yes, now the commands are: $ system datanetwork-add physnet2 vlan $ system host-if-modify -m 1500 -n sriov0 -N 5 -c pci-sriov compute-0 eno1 $ system interface-datanetwork-assign compute-0 sriov0 physnet2 Then when I tried to create a network attached to physnet2, to create ports attached to this network I got the following error: controller-0:~$ openstack network create --mtu 1500 --provider-network-type vlan --provider-physical-network physnet2 sriov-net Error while executing command: BadRequestException: 400, Invalid input for operation: physical_network 'physnet2' unknown for VLAN provider network. I am not sure why this error, I create data network as vlan. Maybe should be a different type since it is created for SRIOV. Do you have any idea? Regards. Juan Carlos Alonso From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Tuesday, July 2, 2019 9:57 AM To: Alonso, Juan Carlos >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: system host-if-modify error The commands related to creating and modifying interface was changed recently. Please refer to the mailing list post with subject: "Provisioning changes to host interface commands" which I have attached to this message. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Monday, July 01, 2019 10:14 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] system host-if-modify error Hi, I have been using the below command to set a SRIOV interface: $ system host-if-modify -m -n -N -p -c pci-sriov $ system host-if-modify -m 1500 -n sriov1 -N 5 -p physnet0 -c pci-sriov compute-0 38922809-dec1-4e55-9f58-5db4b0859ae5 Command works correctly from ISO 20190627 and older. But now I got the following error: system: error: unrecognized arguments: -p 47569880-3225-4a96-b897-b7bf1d114b8d Seems that there is an error in the structure or syntax of command, but -p flag and interface UUID are separated by other parameters. I also try to use -d flag and interface name instead of uuid but got the same error. Do you know if this SRIOV command have changed? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: From cristopher.j.lemus.contreras at intel.com Wed Jul 3 19:43:09 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Wed, 3 Jul 2019 19:43:09 +0000 Subject: [Starlingx-discuss] About auto re-apply of stx-openstack Message-ID: Hello All, Do we have a list of what can trigger an auto re-apply of stx-openstack? Right now, we are aware that when we do a lock/unlock of the standby controller, this will trigger a re-apply. Is this expected? I think that this behavior was recently added, a re-apply wasn’t triggered one or two weeks ago. What other conditions can cause the re-apply? This will help us to determine how can we improve sanity execution and what to expect when we analyze the results. Thanks in advance! 
Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.MacDonald at windriver.com Wed Jul 3 19:58:22 2019 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Wed, 3 Jul 2019 19:58:22 +0000 Subject: [Starlingx-discuss] a question about starlingx error handling behavior In-Reply-To: References: <210898B96CA058408C55992CCAD98676C101F633@ALA-MBD.corp.ad.wrs.com> Message-ID: <210898B96CA058408C55992CCAD98676C101FBE1@ALA-MBD.corp.ad.wrs.com> OK, give me some time to fit this analysis into my existing task list. I'll be in touch before the end of the week. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Wednesday, July 03, 2019 11:34 AM To: MacDonald, Eric Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I retested it. And here is all the information you requested. I uploaded them to my google drive. Below is the link. https://drive.google.com/open?id=1LiCgPz5iS3SApb0Em8ZyA56lryAU-3Vm I conducted two tests. For case 1, the system can recover. For case 2, it can't. Case 1: pull the active controller management cable for longer than 30s, and then reinsert it Case 2: pull the active controller management cable for less than 30s, and then reinsert it For case 2, since controller-1 was shown as "offline" on controller-0. "collect all" can't get the information of controller-1. So I copied the whole folder "/var/log" of controller-1. It is included in the shared zip package. If you need more information, let me know. Thanks for your help again! Thanks. Yi From: Wang, Yi C Sent: Tuesday, July 2, 2019 9:21 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] a question about starlingx error handling behavior Hi Eric, Thank you, Eric! I will collect all the information and get back to you soon. I confirm that I physically pulled the cable. Thanks. Yi From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Tuesday, July 2, 2019 7:54 PM To: Wang, Yi C > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] a question about starlingx error handling behavior Please perform both cases and for each - indicate what cables/interfaces were pulled - indicate what hosts the cables were pulled from - indicate approximate timestamp of when the cable was pulled - indicate approximate timestamp of when the cable was reinserted - Run 'collect all' and provide me access to the collect tarball Also, just to be sure ... please confirm that you are physically pulling the cable and not just ifdowning the interface. Eric. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Sunday, June 30, 2019 9:45 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I am working on the LP 1815513. Based on my tests, if I unplug the cable of active controller for management network for a long time (for example, 30s), and then plug it, the whole system can recover after some reboots. But if I unplug the cable for a short time, and then plug it. The whole system can't recover. I need to lock/unlock controllers manually to bring the system back. So my questions are: 1. Is the behavior acceptable? (recover the system by manual lock/unlock operations) 2. If the answer is no for #1, we need the system to recover automatically. 
I am not familiar with internal maintenance logics, could you give me some hints? Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Wed Jul 3 20:37:27 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 3 Jul 2019 20:37:27 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190703 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-03 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Jul 3 21:03:37 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 3 Jul 2019 21:03:37 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 7/3/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1B82CCF@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jul 3 21:07:25 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 3 Jul 2019 14:07:25 -0700 Subject: [Starlingx-discuss] Python2 -> Python3 Message-ID: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> Folks, There has been some discussion about what it means for StarlingX to be Python3. I would like to clarify my thoughts and propose a direction that will work without creating too much work and risk. The current proposal seems to be to completely convert the base CentOS7.6 system level python to use python3, this carries a high risk factor as changing out all system-level python code could have a cascade effect on system functionality and additional dependencies. 
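A rough way to gauge that blast radius on a CentOS 7 host (a sketch only; the exact queries and counts will vary per install) is to ask rpm how much of the installed system depends on python2:

  rpm -q --whatrequires python | wc -l      # installed packages requiring the python(2) capability
  rpm -q --whatrequires /usr/bin/python     # packages requiring the interpreter path directly

Anything on those lists is potentially affected by swapping out the system-level interpreter.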
While python2 support is due to be deprecated in January from Python.org, RHEL will continue to support python2 for RHEL/CentOS7 as they have to give their customers Long Term support.

A better solution would be to build python3 and the associated requirements from the existing RHEL EPEL (Extra Packages for Enterprise Linux) Source RPMs repo and install them into the ISO. This version correctly installs in a segregated directory tree.

Another option would be to delay the actual python2 conversion to StarlingX 4.0; the OpenStack Train release will still support python2.

There is still work that is needed beyond the conversion of the python code itself to things like RPM specfiles data and other source code (such as C code that has #includes of python2.7). It's not clear to me how much functional testing with python3 has occurred for the flock beyond what Dean has started with devstack.

Sau!

From fungi at yuggoth.org Wed Jul 3 21:22:27 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 3 Jul 2019 21:22:27 +0000
Subject: [Starlingx-discuss] Python2 -> Python3
In-Reply-To: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com>
References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com>
Message-ID: <20190703212226.j7dmbvfjazaghuzn@yuggoth.org>

On 2019-07-03 14:07:25 -0700 (-0700), Saul Wold wrote:
[...]
> Another option would be to delay the actual python2 conversion to
> StarlingX 4.0, the OpenStack Train release will still support
> python2.
[...]

Also the expectation is OpenStack will have the Python 3 from CentOS 8.x in its tested runtimes set for the "U" release. This should work 99.999% the same as RHEL 8.x.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From dtroyer at gmail.com Wed Jul 3 21:55:21 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Wed, 3 Jul 2019 21:55:21 -0500
Subject: [Starlingx-discuss] Python2 -> Python3
In-Reply-To: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com>
References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com>
Message-ID: <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com>

On 7/3/19 4:07 PM, Saul Wold wrote:
> The current proposal seems to be to completely convert the base
> CentOS7.6 system level python to use python3, this carries a high risk
> factor as changing out all system-level python code could have a cascade
> effect on system functionality and additional dependencies. While

Changing the distro/system Python version out from under the rest of the distro seems like an enormous time sink, much less a significant reliability risk.

> A better solution would be to build python3 and the associated
> requirements from the existing RHEL EPEL (Extra Packages for Enterprise
> Linux) Source RPMs repo and install them into the ISO. This version
> correctly installs in a segregated directory tree.

We would probably want to run a significant subset of the upstream OpenStack testing on this combination as it is not (AFAIK) tested there. But this is true of any runtime + distro combination that is not in the fairly short list of combinations that upstream OpenStack actively tests.

> Another option would be to delay the actual python2 conversion to
> StarlingX 4.0, the OpenStack Train release will still support python2.

One downside to this is it leaves us no margin to defer the change again, this is our second chance as it were.
OpenStack U (as of now) is likely to drop py2 support as a guarantee across-the-board. > There is still work that is needed beyond the conversion of the python > code itself to things like RPM specfiles data and other source code > (such as, C code that has #includes of python2.7). It's not clear to me > how much functional testing with python3 has occurred for the flock > beyond what Dean has started with devstack. I managed to get the fault services running on py3, sysinv fell over during the dbsync in my quick post-PTG trial run. That is as far as I took it. Anyone who wants to try can pick out the local.conf I posted [0] dt [0] http://paste.openstack.org/show/753844/ -- Dean Troyer dtroyer at gmail.com From yong.hu at intel.com Thu Jul 4 03:25:09 2019 From: yong.hu at intel.com (Yong Hu) Date: Wed, 3 Jul 2019 20:25:09 -0700 Subject: [Starlingx-discuss] Python2 -> Python3 In-Reply-To: <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> Message-ID: <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? In my view the best solution is to wait for CentOS 8.0 :-) On 03/07/2019 2:55 PM, Dean Troyer wrote: > On 7/3/19 4:07 PM, Saul Wold wrote: >> The current proposal seems to be to completely convert the base >> CentOS7.6 system level python to use python3, this carries a high risk >> factor as changing out all system-level python code could have a >> cascade effect on system functionality and additional dependencies. While > > Changing the distro/system Python version out from under the rest of the > distro seems like an enormous time sink, much less a significant > reliability risk. > >> A better solution would be to build python3 and the associated >> requirements from the existing RHEL EPEL (Extra Packages for >> Enterprise Linux) Source RPMs repo and install them into the ISO. This >> version correctly installs in a segregated directory tree. > > We would probably want to run a significant subset of the upstream > OpenStack testing on this combination as it is not (AFAIK) tested there. >  But this is true of any runtime + distro combination that is not in > the fairly short list of combinations that upstream OpenStack actively > tests. > >> Another option would be to delay the actual python2 conversion to >> StarlingX 4.0, the OpenStack Train release will still support python2. > > One downside to this is it leaves us no margin to defer the change > again, this is our second chance as it were.  OpenStack U (as of now) is > likely to drop py2 support as a guarantee across-the-board. > >> There is still work that is needed beyond the conversion of the python >> code itself to things like RPM specfiles data and other source code >> (such as, C code that has #includes of python2.7). It's not clear to >> me how much functional testing with python3 has occurred for the flock >> beyond what Dean has started with devstack. > > I managed to get the fault services running on py3, sysinv fell over > during the dbsync in my quick post-PTG trial run.  That is as far as I > took it.  
Anyone who wants to try can pick out the local.conf I posted [0] > > dt > > [0] http://paste.openstack.org/show/753844/ > From cindy.xie at intel.com Thu Jul 4 03:36:51 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 4 Jul 2019 03:36:51 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 In-Reply-To: <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> Austin, Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. Thanks. - cindy -----Original Message----- From: Yong Hu [mailto:yong.hu at intel.com] Sent: Thursday, July 4, 2019 11:25 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? In my view the best solution is to wait for CentOS 8.0 :-) On 03/07/2019 2:55 PM, Dean Troyer wrote: > On 7/3/19 4:07 PM, Saul Wold wrote: >> The current proposal seems to be to completely convert the base >> CentOS7.6 system level python to use python3, this carries a high >> risk factor as changing out all system-level python code could have a >> cascade effect on system functionality and additional dependencies. >> While > > Changing the distro/system Python version out from under the rest of > the distro seems like an enormous time sink, much less a significant > reliability risk. > >> A better solution would be to build python3 and the associated >> requirements from the existing RHEL EPEL (Extra Packages for >> Enterprise Linux) Source RPMs repo and install them into the ISO. >> This version correctly installs in a segregated directory tree. > > We would probably want to run a significant subset of the upstream > OpenStack testing on this combination as it is not (AFAIK) tested there. >  But this is true of any runtime + distro combination that is not in > the fairly short list of combinations that upstream OpenStack actively > tests. > >> Another option would be to delay the actual python2 conversion to >> StarlingX 4.0, the OpenStack Train release will still support python2. > > One downside to this is it leaves us no margin to defer the change > again, this is our second chance as it were.  OpenStack U (as of now) > is likely to drop py2 support as a guarantee across-the-board. > >> There is still work that is needed beyond the conversion of the >> python code itself to things like RPM specfiles data and other source >> code (such as, C code that has #includes of python2.7). It's not >> clear to me how much functional testing with python3 has occurred for >> the flock beyond what Dean has started with devstack. > > I managed to get the fault services running on py3, sysinv fell over > during the dbsync in my quick post-PTG trial run.  That is as far as I > took it.  
Anyone who wants to try can pick out the local.conf I posted > [0] > > dt > > [0] http://paste.openstack.org/show/753844/ > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From austin.sun at intel.com Thu Jul 4 03:42:48 2019 From: austin.sun at intel.com (Sun, Austin) Date: Thu, 4 Jul 2019 03:42:48 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> Message-ID: Hi Cindy: Yes. we will do it and update sheet. Thanks. BR Austin Sun. -----Original Message----- From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, July 4, 2019 11:37 AM To: Hu, Yong ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 Austin, Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. Thanks. - cindy -----Original Message----- From: Yong Hu [mailto:yong.hu at intel.com] Sent: Thursday, July 4, 2019 11:25 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? In my view the best solution is to wait for CentOS 8.0 :-) On 03/07/2019 2:55 PM, Dean Troyer wrote: > On 7/3/19 4:07 PM, Saul Wold wrote: >> The current proposal seems to be to completely convert the base >> CentOS7.6 system level python to use python3, this carries a high >> risk factor as changing out all system-level python code could have a >> cascade effect on system functionality and additional dependencies. >> While > > Changing the distro/system Python version out from under the rest of > the distro seems like an enormous time sink, much less a significant > reliability risk. > >> A better solution would be to build python3 and the associated >> requirements from the existing RHEL EPEL (Extra Packages for >> Enterprise Linux) Source RPMs repo and install them into the ISO. >> This version correctly installs in a segregated directory tree. > > We would probably want to run a significant subset of the upstream > OpenStack testing on this combination as it is not (AFAIK) tested there. >  But this is true of any runtime + distro combination that is not in > the fairly short list of combinations that upstream OpenStack actively > tests. > >> Another option would be to delay the actual python2 conversion to >> StarlingX 4.0, the OpenStack Train release will still support python2. > > One downside to this is it leaves us no margin to defer the change > again, this is our second chance as it were.  OpenStack U (as of now) > is likely to drop py2 support as a guarantee across-the-board. 
> >> There is still work that is needed beyond the conversion of the >> python code itself to things like RPM specfiles data and other source >> code (such as, C code that has #includes of python2.7). It's not >> clear to me how much functional testing with python3 has occurred for >> the flock beyond what Dean has started with devstack. > > I managed to get the fault services running on py3, sysinv fell over > during the dbsync in my quick post-PTG trial run.  That is as far as I > took it.  Anyone who wants to try can pick out the local.conf I posted > [0] > > dt > > [0] http://paste.openstack.org/show/753844/ > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Thu Jul 4 04:16:08 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 3 Jul 2019 21:16:08 -0700 Subject: [Starlingx-discuss] Python2 -> Python3 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> Message-ID: On 7/3/19 8:36 PM, Xie, Cindy wrote: > Austin, > Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: > > In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. > More specifically, are they available in the EPEL repo or someplace else? Sau! > Thanks. - cindy > > -----Original Message----- > From: Yong Hu [mailto:yong.hu at intel.com] > Sent: Thursday, July 4, 2019 11:25 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? > > In my view the best solution is to wait for CentOS 8.0 :-) > > > On 03/07/2019 2:55 PM, Dean Troyer wrote: >> On 7/3/19 4:07 PM, Saul Wold wrote: >>> The current proposal seems to be to completely convert the base >>> CentOS7.6 system level python to use python3, this carries a high >>> risk factor as changing out all system-level python code could have a >>> cascade effect on system functionality and additional dependencies. >>> While >> >> Changing the distro/system Python version out from under the rest of >> the distro seems like an enormous time sink, much less a significant >> reliability risk. >> >>> A better solution would be to build python3 and the associated >>> requirements from the existing RHEL EPEL (Extra Packages for >>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>> This version correctly installs in a segregated directory tree. >> >> We would probably want to run a significant subset of the upstream >> OpenStack testing on this combination as it is not (AFAIK) tested there. 
>>  But this is true of any runtime + distro combination that is not in >> the fairly short list of combinations that upstream OpenStack actively >> tests. >> >>> Another option would be to delay the actual python2 conversion to >>> StarlingX 4.0, the OpenStack Train release will still support python2. >> >> One downside to this is it leaves us no margin to defer the change >> again, this is our second chance as it were.  OpenStack U (as of now) >> is likely to drop py2 support as a guarantee across-the-board. >> >>> There is still work that is needed beyond the conversion of the >>> python code itself to things like RPM specfiles data and other source >>> code (such as, C code that has #includes of python2.7). It's not >>> clear to me how much functional testing with python3 has occurred for >>> the flock beyond what Dean has started with devstack. >> >> I managed to get the fault services running on py3, sysinv fell over >> during the dbsync in my quick post-PTG trial run.  That is as far as I >> took it.  Anyone who wants to try can pick out the local.conf I posted >> [0] >> >> dt >> >> [0] http://paste.openstack.org/show/753844/ >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Eric.MacDonald at windriver.com Thu Jul 4 11:49:16 2019 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Thu, 4 Jul 2019 11:49:16 +0000 Subject: [Starlingx-discuss] a question about starlingx error handling behavior In-Reply-To: References: <210898B96CA058408C55992CCAD98676C101F633@ALA-MBD.corp.ad.wrs.com> Message-ID: <210898B96CA058408C55992CCAD98676C101FCC7@ALA-MBD.corp.ad.wrs.com> Hi Yi, Seems like your issue is a duplicate of another LP that both myself and one of my colleagues are (currently) working on a fix for. https://bugs.launchpad.net/starlingx/+bug/1815969 The above LP requires the following LP that I'm working on a fix for before it can be delivered. https://bugs.launchpad.net/starlingx/+bug/1835268 Please retest once the above 2 LP's have updates delivered against them. Eric. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Wednesday, July 03, 2019 11:34 AM To: MacDonald, Eric Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I retested it. And here is all the information you requested. I uploaded them to my google drive. Below is the link. https://drive.google.com/open?id=1LiCgPz5iS3SApb0Em8ZyA56lryAU-3Vm I conducted two tests. For case 1, the system can recover. For case 2, it can't. Case 1: pull the active controller management cable for longer than 30s, and then reinsert it Case 2: pull the active controller management cable for less than 30s, and then reinsert it For case 2, since controller-1 was shown as "offline" on controller-0. "collect all" can't get the information of controller-1. So I copied the whole folder "/var/log" of controller-1. It is included in the shared zip package. If you need more information, let me know. Thanks for your help again! Thanks. 
Yi From: Wang, Yi C Sent: Tuesday, July 2, 2019 9:21 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] a question about starlingx error handling behavior Hi Eric, Thank you, Eric! I will collect all the information and get back to you soon. I confirm that I physically pulled the cable. Thanks. Yi From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Tuesday, July 2, 2019 7:54 PM To: Wang, Yi C > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] a question about starlingx error handling behavior Please perform both cases and for each - indicate what cables/interfaces were pulled - indicate what hosts the cables were pulled from - indicate approximate timestamp of when the cable was pulled - indicate approximate timestamp of when the cable was reinserted - Run 'collect all' and provide me access to the collect tarball Also, just to be sure ... please confirm that you are physically pulling the cable and not just ifdowning the interface. Eric. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Sunday, June 30, 2019 9:45 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I am working on the LP 1815513. Based on my tests, if I unplug the cable of active controller for management network for a long time (for example, 30s), and then plug it, the whole system can recover after some reboots. But if I unplug the cable for a short time, and then plug it. The whole system can't recover. I need to lock/unlock controllers manually to bring the system back. So my questions are: 1. Is the behavior acceptable? (recover the system by manual lock/unlock operations) 2. If the answer is no for #1, we need the system to recover automatically. I am not familiar with internal maintenance logics, could you give me some hints? Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong.lc at inspur.com Thu Jul 4 12:38:04 2019 From: gaosong.lc at inspur.com (=?gb2312?B?U29uZyBHYW8gc29uZyAouN/LySk=?=) Date: Thu, 4 Jul 2019 12:38:04 +0000 Subject: [Starlingx-discuss] Help needed to add patch to openstackclient Message-ID: <35a91ae813ed423e8b20b61191b2d4ae@inspur.com> Hi Folks: What is the steps to add patch to openstackclient for stx.1.0. Currently we have some problems with the openstack cli, and already find solutions, but stuck in the commit steps. Any help will be appreciated! Best Regards Song.Gao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3603 bytes Desc: not available URL: From Al.Bailey at windriver.com Thu Jul 4 13:13:10 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Thu, 4 Jul 2019 13:13:10 +0000 Subject: [Starlingx-discuss] Help needed to add patch to openstackclient In-Reply-To: <35a91ae813ed423e8b20b61191b2d4ae@inspur.com> References: <35a91ae813ed423e8b20b61191b2d4ae@inspur.com> Message-ID: Currently the manifest for starlingx openstackclient builds from stable/stein. https://opendev.org/starlingx/manifest/src/branch/master/default.xml#L51 So if your fix is merged in vanilla openstackclient, and back-ported to the stable stein branch, it will get picked up automatically. 
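For the upstream route, the mechanics are the usual OpenStack stable-branch backport; roughly like this (the commit hash is a placeholder, and git-review needs to be installed and configured):

  git clone https://opendev.org/openstack/python-openstackclient
  cd python-openstackclient
  git checkout -b stein-backport origin/stable/stein
  git cherry-pick -x <sha-of-master-fix>    # -x records the original commit id
  git review stable/stein                   # propose the backport to Gerrit

Once the backport merges, the next StarlingX build picks it up through the manifest.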
If you need to temporarily diverge from vanilla openstack, we typically request that the gerrit review be submitted to the openstack community, and then we can temporarily merge it as a patch until the two environments are able to become in sync. We do that with openstack-helm https://opendev.org/starlingx/upstream/src/branch/master/openstack/openstack-helm Al From: Song Gao song (高松) [mailto:gaosong.lc at inspur.com] Sent: Thursday, July 04, 2019 8:38 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Help needed to add patch to openstackclient Hi Folks: What is the steps to add patch to openstackclient for stx.1.0. Currently we have some problems with the openstack cli, and already find solutions, but stuck in the commit steps. Any help will be appreciated! Best Regards Song.Gao -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong.lc at inspur.com Thu Jul 4 13:27:30 2019 From: gaosong.lc at inspur.com (=?gb2312?B?U29uZyBHYW8gc29uZyAouN/LySk=?=) Date: Thu, 4 Jul 2019 13:27:30 +0000 Subject: [Starlingx-discuss] =?gb2312?b?tPC4tDogSGVscCBuZWVkZWQgdG8gYWRk?= =?gb2312?b?IHBhdGNoIHRvIG9wZW5zdGFja2NsaWVudA==?= In-Reply-To: References: <35265f2406eaca448745d61525d8b9ef@sslemail.net> Message-ID: Thanks for your timely response! But, per starlingx 2018.10 how the openstackclient package built, is it built from the repo hosted in starlingx/upstream? And if it’s right, I can make ti straight to commit to upstream. 发件人: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] 发送时间: 2019年7月4日 21:13 收件人: Song Gao song (高松) ; starlingx-discuss at lists.starlingx.io 主题: RE: Help needed to add patch to openstackclient Currently the manifest for starlingx openstackclient builds from stable/stein. https://opendev.org/starlingx/manifest/src/branch/master/default.xml#L51 So if your fix is merged in vanilla openstackclient, and back-ported to the stable stein branch, it will get picked up automatically. If you need to temporarily diverge from vanilla openstack, we typically request that the gerrit review be submitted to the openstack community, and then we can temporarily merge it as a patch until the two environments are able to become in sync. We do that with openstack-helm https://opendev.org/starlingx/upstream/src/branch/master/openstack/openstack -helm Al From: Song Gao song (高松) [mailto:gaosong.lc at inspur.com] Sent: Thursday, July 04, 2019 8:38 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Help needed to add patch to openstackclient Hi Folks: What is the steps to add patch to openstackclient for stx.1.0. Currently we have some problems with the openstack cli, and already find solutions, but stuck in the commit steps. Any help will be appreciated! Best Regards Song.Gao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3603 bytes Desc: not available URL: From Al.Bailey at windriver.com Thu Jul 4 14:02:18 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Thu, 4 Jul 2019 14:02:18 +0000 Subject: [Starlingx-discuss] Help needed to add patch to openstackclient In-Reply-To: References: <35265f2406eaca448745d61525d8b9ef@sslemail.net> Message-ID: My mistake, my response was based on the current codebase and release. I don’t know what the procedure is for porting or supporting fixes in older STX releases. 
Al From: Song Gao song (高松) [mailto:gaosong.lc at inspur.com] Sent: Thursday, July 04, 2019 9:27 AM To: Bailey, Henry Albert (Al); starlingx-discuss at lists.starlingx.io Subject: 答复: Help needed to add patch to openstackclient Thanks for your timely response! But, per starlingx 2018.10 how the openstackclient package built, is it built from the repo hosted in starlingx/upstream? And if it’s right, I can make ti straight to commit to upstream. 发件人: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] 发送时间: 2019年7月4日 21:13 收件人: Song Gao song (高松) ; starlingx-discuss at lists.starlingx.io 主题: RE: Help needed to add patch to openstackclient Currently the manifest for starlingx openstackclient builds from stable/stein. https://opendev.org/starlingx/manifest/src/branch/master/default.xml#L51 So if your fix is merged in vanilla openstackclient, and back-ported to the stable stein branch, it will get picked up automatically. If you need to temporarily diverge from vanilla openstack, we typically request that the gerrit review be submitted to the openstack community, and then we can temporarily merge it as a patch until the two environments are able to become in sync. We do that with openstack-helm https://opendev.org/starlingx/upstream/src/branch/master/openstack/openstack-helm Al From: Song Gao song (高松) [mailto:gaosong.lc at inspur.com] Sent: Thursday, July 04, 2019 8:38 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Help needed to add patch to openstackclient Hi Folks: What is the steps to add patch to openstackclient for stx.1.0. Currently we have some problems with the openstack cli, and already find solutions, but stuck in the commit steps. Any help will be appreciated! Best Regards Song.Gao -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong.lc at inspur.com Thu Jul 4 14:20:46 2019 From: gaosong.lc at inspur.com (=?gb2312?B?U29uZyBHYW8gc29uZyAouN/LySk=?=) Date: Thu, 4 Jul 2019 14:20:46 +0000 Subject: [Starlingx-discuss] Help needed to add patch to openstackclient Message-ID: <0f8eb375bf2a4c6cb472a021dd6c11d0@inspur.com> It’s ok! Waiting for the other developers to reply. 发件人: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] 发送时间: 2019年7月4日 22:02 收件人: Song Gao song (高松) ; starlingx-discuss at lists.starlingx.io 主题: RE: Help needed to add patch to openstackclient My mistake, my response was based on the current codebase and release. I don’t know what the procedure is for porting or supporting fixes in older STX releases. Al From: Song Gao song (高松) [mailto:gaosong.lc at inspur.com] Sent: Thursday, July 04, 2019 9:27 AM To: Bailey, Henry Albert (Al); starlingx-discuss at lists.starlingx.io Subject: 答复: Help needed to add patch to openstackclient Thanks for your timely response! But, per starlingx 2018.10 how the openstackclient package built, is it built from the repo hosted in starlingx/upstream? And if it’s right, I can make ti straight to commit to upstream. 发件人: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] 发送时间: 2019年7月4日 21:13 收件人: Song Gao song (高松) >; starlingx-discuss at lists.starlingx.io 主题: RE: Help needed to add patch to openstackclient Currently the manifest for starlingx openstackclient builds from stable/stein. https://opendev.org/starlingx/manifest/src/branch/master/default.xml#L51 So if your fix is merged in vanilla openstackclient, and back-ported to the stable stein branch, it will get picked up automatically. 
If you need to temporarily diverge from vanilla openstack, we typically request that the gerrit review be submitted to the openstack community, and then we can temporarily merge it as a patch until the two environments are able to become in sync. We do that with openstack-helm https://opendev.org/starlingx/upstream/src/branch/master/openstack/openstack -helm Al From: Song Gao song (高松) [mailto:gaosong.lc at inspur.com] Sent: Thursday, July 04, 2019 8:38 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Help needed to add patch to openstackclient Hi Folks: What is the steps to add patch to openstackclient for stx.1.0. Currently we have some problems with the openstack cli, and already find solutions, but stuck in the commit steps. Any help will be appreciated! Best Regards Song.Gao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3603 bytes Desc: not available URL: From Bill.Zvonar at windriver.com Thu Jul 4 14:32:23 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 4 Jul 2019 14:32:23 +0000 Subject: [Starlingx-discuss] First Contact SIG (July 4, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A85632@ALA-MBD.corp.ad.wrs.com> Apologies for forgetting to send a reminder out before today's call. Today we discussed the historical responsiveness to questions on the mailing list, focusing on those who are looking for help & are more 'new' than others. The results for May are captured on the etherpad [0] under the heading "Mailing List Responsiveness". We'll discuss in next week's community call, comments are welcome here or on the etherpad. Bill... [0] https://etherpad.openstack.org/p/stx-first-contact From dtroyer at gmail.com Thu Jul 4 14:54:08 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 4 Jul 2019 09:54:08 -0500 Subject: [Starlingx-discuss] =?utf-8?b?562U5aSNOiBIZWxwIG5lZWRlZCB0byBh?= =?utf-8?q?dd_patch_to_openstackclient?= In-Reply-To: References: <35265f2406eaca448745d61525d8b9ef@sslemail.net> Message-ID: On 7/4/19 8:27 AM, Song Gao song (高松) wrote: > Thanks for your timely response! > > But, per starlingx 2018.10 how the openstackclient package built, is it > built from the repo hosted in starlingx/upstream? IIRC the 2018.10 build is based on OSC's stable/pike branch, which is now out of maintenance. OSC is designed to be usable from master with every cloud going back to Juno or so. The issue will be if it needs to be co-installed with other Python dependencies. If you can run OSC from another system or from a virtual env to should be able to use the current 3.19.0 release with StarlingX. > And if it’s right, I can make ti straight to commit to upstream. Is this an issue that has been fixed upstream? dt -- Dean Troyer dtroyer at gmail.com From gaosong.lc at inspur.com Thu Jul 4 15:16:06 2019 From: gaosong.lc at inspur.com (=?utf-8?B?U29uZyBHYW8gc29uZyAo6auY5p2+KQ==?=) Date: Thu, 4 Jul 2019 15:16:06 +0000 Subject: [Starlingx-discuss] =?utf-8?b?562U5aSNOiBbbGlzdHMuc3Rhcmxpbmd4?= =?utf-8?b?Lmlv5Luj5Y+RXVJlOiAg562U5aSNOiBIZWxwIG5lZWRlZCB0byBhZGQgcGF0?= =?utf-8?q?ch_to_openstackclient?= In-Reply-To: References: <35265f2406eaca448745d61525d8b9ef@sslemail.net> Message-ID: <9738ad90d55f4bb0914d6c4117ca2b0f@inspur.com> Yes, it has been fixed, but cannot be directly used because maybe the related novaclient or cinderclient project need to upgrade at the same time. 
It seems that the only solution is to add patch to the starlingx/upstream r2018.10 branch, isn't it? > -----The Original----- >project: [lists.starlingx.io] Re: [Starlingx-discuss] RE: Help needed to add > patch to openstackclient > > On 7/4/19 8:27 AM, Song Gao song wrote: > > Thanks for your timely response! > > > > But, per starlingx 2018.10 how the openstackclient package built, is > > it built from the repo hosted in starlingx/upstream? > > IIRC the 2018.10 build is based on OSC's stable/pike branch, which is now out > of maintenance. > > OSC is designed to be usable from master with every cloud going back to Juno > or so. The issue will be if it needs to be co-installed with other Python > dependencies. If you can run OSC from another system or from a virtual env > to should be able to use the current 3.19.0 release with StarlingX. > > > And if it’s right, I can make ti straight to commit to upstream. > > Is this an issue that has been fixed upstream? > > dt > > -- > Dean Troyer > dtroyer at gmail.com > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3603 bytes Desc: not available URL: From dtroyer at gmail.com Thu Jul 4 15:42:30 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 4 Jul 2019 10:42:30 -0500 Subject: [Starlingx-discuss] =?utf-8?b?562U5aSNOiBbbGlzdHMuc3Rhcmxpbmd4?= =?utf-8?b?Lmlv5Luj5Y+RXVJlOiAg562U5aSNOiBIZWxwIG5lZWRlZCB0byBhZGQgcGF0?= =?utf-8?q?ch_to_openstackclient?= In-Reply-To: <9738ad90d55f4bb0914d6c4117ca2b0f@inspur.com> References: <35265f2406eaca448745d61525d8b9ef@sslemail.net> <9738ad90d55f4bb0914d6c4117ca2b0f@inspur.com> Message-ID: <4e7c694b-461e-bc87-b416-3011f8c8590f@gmail.com> On 7/4/19 10:16 AM, Song Gao song (高松) wrote: > Yes, it has been fixed, but cannot be directly used because maybe the related novaclient or cinderclient project need to upgrade at the same time. > It seems that the only solution is to add patch to the starlingx/upstream r2018.10 branch, isn't it? If you need to continue with that release, yes you will need to backport whatever patches you need. pike is on 'extended maintenance' upstream so nothing that is not a CVE will get fixed there now. If the fix also depends on other client libs you will need to bring those down too. Is this fix required for the internal operations of StarlingX? ie, is it the puppet or other internal code that needs it, or is it for your administrative uses that you need it? If it is for admin use, you can install OSC 3.19.0 into a virtual env, even on a controller if necessary, and use it directly. 
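A minimal sketch of that virtual env route (assuming the host can reach PyPI or a local mirror; on a python2-only controller use virtualenv rather than the py3 venv module):

  virtualenv ~/osc-venv                     # or: python3 -m venv ~/osc-venv
  source ~/osc-venv/bin/activate
  pip install python-openstackclient==3.19.0
  openstack --version                       # confirm the venv's client is being used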
dt -- Dean Troyer dtroyer at gmail.com From maria.g.perez.ibarra at intel.com Thu Jul 4 21:41:35 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 4 Jul 2019 21:41:35 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/04/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID=" 20190628T013000Z" from (link) ---------------------------------------------------------------------- Overall Results: Total = 423 Pass = 185 Fail = 12 Blocked = 17 Total executed = 214 Pass Rate = 86.44% ---------------------------------------------------------------------- Results per Domain: Regression - AIO-SX 23 PASS |1 FAIL|2 BLOCKED Regression - Backup & Restore Regression - Distributed Cloud Regression - Gnoochi 13 PASS Regression - FM Regression - HA Regression - Heat 12 PASS | 1 BLOCKED Regression - Horizon 4 PASS Regression - Install and Config Regression - Maintenance Regression - Networking 75 PASS | 7 FAIL | 7 BLOCKED Regression - Nova 2 PASS Regression - Security 27 PASS | 2 FAIL Regression - Storage Regression - Inventory 23 PASS | 2 FAIL System Test 6 PASS | 7 BLOCKED --------------------------------------------------------------------------- Bugs: Controller can't unlock after lock on AIO-SX : https://bugs.launchpad.net/starlingx/+bug/1833472 user does not login within configured time(60s) login is aborted : https://bugs.launchpad.net/starlingx/+bug/1833469 After pull data cable on the compute, no alarm has triggered : https://bugs.launchpad.net/starlingx/+bug/1834512 System account doesn't block after invalid login attempts : https://bugs.launchpad.net/starlingx/+bug/1814345 Cannot create instances with SRIOV port : https://bugs.launchpad.net/starlingx/+bug/1835318 Instance cannot create with network driver e1000 and rtl8139 : https://bugs.launchpad.net/starlingx/+bug/1835300 After stopping neutron-dhcp-agent service no alarm generated: https://bugs.launchpad.net/starlingx/+bug/1835440 Containers: lock_host failed on a host with config_drive VM : https://bugs.launchpad.net/starlingx/+bug/1821026 200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot : https://bugs.launchpad.net/starlingx/+bug/1832047 Traceback and abort on live migration with cpu realtime : https://bugs.launchpad.net/starlingx/+bug/1834077 virsh only listing one volume, even though there was an additional volume attached after instantiation : https://bugs.launchpad.net/starlingx/+bug/1834194 3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host : https://bugs.launchpad.net/starlingx/+bug/1834255 Device UUID is missing when boot up VM with block device : https://bugs.launchpad.net/starlingx/+bug/1835282 stx-openstack apply takes longer time when lock and unlock on standby controller : https://bugs.launchpad.net/starlingx/+bug/1834083 Port list was not showing for some computes during install : https://bugs.launchpad.net/starlingx/+bug/1834245 Total Bugs: 15 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Thu Jul 4 22:10:02 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 4 Jul 2019 22:10:02 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190704 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-04 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yizhoux.xu at intel.com Fri Jul 5 01:16:12 2019 From: yizhoux.xu at intel.com (Xu, YizhouX) Date: Fri, 5 Jul 2019 01:16:12 +0000 Subject: [Starlingx-discuss] memory reserved for vm not enough In-Reply-To: References: <8E7F30EFCB9B334AAA0491274BEDBE750104D5E2@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD1270@SHSMSX104.ccr.corp.intel.com> Message-ID: <8E7F30EFCB9B334AAA0491274BEDBE750104D772@SHSMSX105.ccr.corp.intel.com> Sorry for late reply, memory has been allocated from hugepage with hw:mem_page_size=2M flavor thanks for all your help. From: Sun, Austin Sent: Wednesday, July 3, 2019 8:50 PM To: Xie, Cindy ; Xu, YizhouX ; 'starlingx-discuss at lists.starlingx.io' ; Yang, Bin Subject: RE: [Starlingx-discuss] memory reserved for vm not enough Hi Yizhou: You can use 'system host-memory-modify' command to adjust. memory config. For example : 1) lock AIO controller-0 2) system host-memory-modify controller-0 -2M -f vswitch. 3) unlock AIO controller-0. When controller-0 is available again. the memory will be adjusted Thanks. BR Austin Sun. From: Xie, Cindy Sent: Wednesday, July 3, 2019 8:37 PM To: Xu, YizhouX >; 'starlingx-discuss at lists.starlingx.io' >; Yang, Bin >; Sun, Austin > Subject: RE: [Starlingx-discuss] memory reserved for vm not enough Wondering if Austin or Bin can provide some insight here. 
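[Editor's note: the resolution in this thread has two halves, shown together below. This is a minimal sketch, assuming an AIO controller-0 and the m1.large flavor named later in the thread; the 2M page count (4096 pages, about 8 GiB) is illustrative, and the host-memory-modify argument order and the '-f application' function value are assumptions — Austin's quoted command lost its page-count argument in the archive — so verify with 'system help host-memory-modify' on the target build.

    # Guest memory only comes out of the reserved hugepages if the flavor
    # asks for hugepage backing (this is what worked for Yizhou above)
    openstack flavor set m1.large --property hw:mem_page_size=2M

    # Reallocating the host's hugepage reservation is a lock/modify/unlock cycle
    system host-lock controller-0
    system host-memory-modify controller-0 0 -2M 4096 -f application
    system host-unlock controller-0
]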
From: Xu, YizhouX [mailto:yizhoux.xu at intel.com]
Sent: Wednesday, July 3, 2019 7:40 PM
To: 'starlingx-discuss at lists.starlingx.io'
Subject: [Starlingx-discuss] memory reserved for vm not enough

Hi all:
I'm testing pci-passthrough with an AIO stx node (iso version: 20190607T142331Z). After the install completed, I found that there's not enough memory reserved for my vms (server total memory is 64G, but only 7-8G can be allocated for vms). I checked the node with `system host-memory-list controller-0` and found that hugepages were configured, and I didn't deploy dpdk-ovs.

Here are my questions:
1. Does stx turn on hugepages by default? Can this feature be turned off to get all the reserved memory for my vms (I don't need dpdk-ovs)?
2. If hugepages are necessary, then referring to https://docs.openstack.org/nova/pike/admin/huge-pages.html , I've customized the flavor for huge page allocations with `openstack flavor set m1.large --property hw:mem_page_size=large`. But it didn't work (the vm boots successfully, but no memory is allocated from the reserved hugepages). Did I do it the right way? Where can I trace the error?

Here are the details of my node:

[wrsroot at controller-0 ~(keystone_admin)]$ system host-memory-list controller-0
| processor                | 0       |
| mem_total(MiB)           | 51308   |
| mem_platform(MiB)        | 11000   |
| mem_avail(MiB)           | 51308   |
| hugepages(hp)_configured | True    |
| vs_hp_size(MiB)          | 1024    |
| vs_hp_total              | 0       |
| vs_hp_avail              | 0       |
| vs_hp_reqd               | None    |
| app_total_4K             | 1789952 |
| app_hp_total_2M          | 22158   |
| app_hp_avail_2M          | 22158   |
| app_hp_pending_2M        | None    |
| app_hp_total_1G          | 0       |
| app_hp_avail_1G          | 0       |
| app_hp_pending_1G        | None    |
| app_hp_use_1G            | True    |

[wrsroot at controller-0 ~(keystone_admin)]$ cat /proc/meminfo | grep Huge
HugePages_Total:   22158
HugePages_Free:    22158
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Best Regards,
Xu, YiZhou

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yi.c.wang at intel.com  Fri Jul 5 03:29:05 2019
From: yi.c.wang at intel.com (Wang, Yi C)
Date: Fri, 5 Jul 2019 03:29:05 +0000
Subject: [Starlingx-discuss] a question about starlingx error handling behavior
In-Reply-To: <210898B96CA058408C55992CCAD98676C101FCC7@ALA-MBD.corp.ad.wrs.com>
References: <210898B96CA058408C55992CCAD98676C101F633@ALA-MBD.corp.ad.wrs.com> <210898B96CA058408C55992CCAD98676C101FCC7@ALA-MBD.corp.ad.wrs.com>
Message-ID:

Hi Eric,
Thank you very much! Sure, I will monitor the two LPs. Once they are resolved, I will recheck my LP. Thanks.
Yi

From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com]
Sent: Thursday, July 4, 2019 7:49 PM
To: Wang, Yi C
Cc: 'starlingx-discuss at lists.starlingx.io' ; Qian, Bin
Subject: RE: [Starlingx-discuss] a question about starlingx error handling behavior

Hi Yi,
Seems like your issue is a duplicate of another LP that both myself and one of my colleagues are (currently) working on a fix for.
https://bugs.launchpad.net/starlingx/+bug/1815969 The above LP requires the following LP that I'm working on a fix for before it can be delivered. https://bugs.launchpad.net/starlingx/+bug/1835268 Please retest once the above 2 LP's have updates delivered against them. Eric. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Wednesday, July 03, 2019 11:34 AM To: MacDonald, Eric Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I retested it. And here is all the information you requested. I uploaded them to my google drive. Below is the link. https://drive.google.com/open?id=1LiCgPz5iS3SApb0Em8ZyA56lryAU-3Vm I conducted two tests. For case 1, the system can recover. For case 2, it can't. Case 1: pull the active controller management cable for longer than 30s, and then reinsert it Case 2: pull the active controller management cable for less than 30s, and then reinsert it For case 2, since controller-1 was shown as "offline" on controller-0. "collect all" can't get the information of controller-1. So I copied the whole folder "/var/log" of controller-1. It is included in the shared zip package. If you need more information, let me know. Thanks for your help again! Thanks. Yi From: Wang, Yi C Sent: Tuesday, July 2, 2019 9:21 PM To: MacDonald, Eric > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] a question about starlingx error handling behavior Hi Eric, Thank you, Eric! I will collect all the information and get back to you soon. I confirm that I physically pulled the cable. Thanks. Yi From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: Tuesday, July 2, 2019 7:54 PM To: Wang, Yi C > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] a question about starlingx error handling behavior Please perform both cases and for each - indicate what cables/interfaces were pulled - indicate what hosts the cables were pulled from - indicate approximate timestamp of when the cable was pulled - indicate approximate timestamp of when the cable was reinserted - Run 'collect all' and provide me access to the collect tarball Also, just to be sure ... please confirm that you are physically pulling the cable and not just ifdowning the interface. Eric. From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Sunday, June 30, 2019 9:45 PM To: MacDonald, Eric Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] a question about starlingx error handling behavior Importance: High Hi Eric, I am working on the LP 1815513. Based on my tests, if I unplug the cable of active controller for management network for a long time (for example, 30s), and then plug it, the whole system can recover after some reboots. But if I unplug the cable for a short time, and then plug it. The whole system can't recover. I need to lock/unlock controllers manually to bring the system back. So my questions are: 1. Is the behavior acceptable? (recover the system by manual lock/unlock operations) 2. If the answer is no for #1, we need the system to recover automatically. I am not familiar with internal maintenance logics, could you give me some hints? Thanks. Yi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin.sun at intel.com Fri Jul 5 11:02:14 2019 From: austin.sun at intel.com (Sun, Austin) Date: Fri, 5 Jul 2019 11:02:14 +0000 Subject: [Starlingx-discuss] new PR for staging stx-libvirt Message-ID: Hi integ and staging core reviewer:  I'm working on bug #1834194 [1], after doing analysis and check, This issue is regression in libvirt 4.7.0.  According to [2], reverting commit 192fdaa614e3800255048a8a70c1292ccf18397a is merged in libvirt 4.9.0 to fix such issue.  I have verified this issue after build stx-base, stx-libvirt and stx-nova docker images with this revert. Would you like review new PR [3] and integ [4] changes ? [1] https://bugs.launchpad.net/starlingx/+bug/1834194 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1672620 [3] https://github.com/starlingx-staging/stx-libvirt/pull/6 [4] https://review.opendev.org/#/c/669323/1 Thanks. BR Austin Sun. From Frank.Miller at windriver.com Fri Jul 5 18:57:01 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 5 Jul 2019 18:57:01 +0000 Subject: [Starlingx-discuss] Next Weekly Containerization Meeting to occur on July 15 Message-ID: Just a reminder that we will not hold a weekly meeting on July 8th and the next meeting will occur on July 15th. Frank From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Friday, June 28, 2019 10:08 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Canceled: Weekly Containerization Meeting July 1 & July 8 Please note that we will not be holding a StarlingX containerization meeting on Monday July 1st due to a national holiday nor on Monday July 8th due to vacation. Our next meeting will be held Monday July 15th. Frank Containers Project Lead -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Jul 5 19:32:28 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 5 Jul 2019 19:32:28 +0000 Subject: [Starlingx-discuss] Launchpad Bug Template Message-ID: <151EE31B9FCCA54397A757BC674650F0C1546025@ALA-MBD.corp.ad.wrs.com> Hello all, This is a friendly reminder to use the StarlingX bug template when reporting issues in Launchpad. We have been seeing a lot of new bugs that are not using the template. The template is available on the wiki at: https://wiki.openstack.org/wiki/StarlingX/BugTemplate and also visible in Launchpad when you click "Report a Bug" The template helps the teams screening and investigating issues to get key information to assist in determining the priority of the reported issues. Thanks, Ghada StarlingX Release Team -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Fri Jul 5 22:40:03 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 5 Jul 2019 22:40:03 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190705 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-05 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Wed Jul 3 13:27:05 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 3 Jul 2019 13:27:05 +0000 Subject: [Starlingx-discuss] [StarlingX] Regardings magnum service on latest stx version In-Reply-To: <317AB83A10F93A4895BD9A9AE0931BE597E14D@MPS-SYMBX03.neusoft.internal> References: <317AB83A10F93A4895BD9A9AE0931BE597E14D@MPS-SYMBX03.neusoft.internal> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC25709B1@ALA-MBD.corp.ad.wrs.com> Magnum is not supported/integrated in stx2.0 Brent From: lilong-neu at neusoft.com [mailto:lilong-neu at neusoft.com] Sent: Wednesday, July 3, 2019 6:30 AM To: starlingx-discuss at lists.starlingx.io Cc: shuicheng.lin at intel.com; hai.tao.wang at intel.com; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Su Yang ; zhaos at neusoft.com; cindy.xie at intel.com; 张志国 ; zhaos at neusoft.com; wanghejun at neusoft.com Subject: [Starlingx-discuss] [StarlingX] Regardings magnum service on latest stx version Hello StarlingX guys, we have confirmed magnum service at latest stx version many times which derectly failed to run. so we supposed that the new version of starlingx does not support the magnum service and may be replaced by other services. According to the current situation, if magnum is no longer maintained in the new version of starlingx, it is recommended to close the bug: https://bugs.launchpad.net/starlingx/+bug/1820324 Could you give some suggestions? 
others: we tried to configure magnum in the starlingx (2019/06/26) environment, according to the official website(https://docs.openstack.org/magnum/latest/install/install-rdo.html#top), after installation When the related service is prompted, there is no corresponding download resource. The log is as follows: --------------------- controller-0:~$ sudo yum install openstack-magnum-api openstack-magnum-conductor python-magnumclient Password: Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile base | 3.6 kB 00:00:00 extras | 3.4 kB 00:00:00 updates | 3.4 kB 00:00:00 No package openstack-magnum-api available. No package openstack-magnum-conductor available. Nothing to do --------------------- ---------------------------------------- Neusoft Corporation Neusoft Group (Dalian) Co., Ltd. No. 901 Huangpu Road, Dalian 116085, PRC Website: www.neusoft.com Mobile: (86) 15840916693 Tel:(86 0411) 8483 2794 E-mail: lilong-neu at neusoft.com --------------------------------------------------------------------------------------------------- Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful.If you have received this communication in error,please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you. --------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ronald.Stone at windriver.com Wed Jul 3 14:07:03 2019 From: Ronald.Stone at windriver.com (Stone, Ronald) Date: Wed, 3 Jul 2019 14:07:03 +0000 Subject: [Starlingx-discuss] controller-0 default password Message-ID: <90B8CFEDE03A6549A2DE0880F7B0DF610804CF26@ALA-MBD.corp.ad.wrs.com> Hi, The instructions at https://docs.starlingx.io/deploy_install_guides/latest/aio_duplex/index.html#setting-up-controller-0 indicate that the default user and password on the initial post-install boot are wrsroot. I am unable to log in with these credentials and change the password. Can someone confirm the correct strings? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Wed Jul 3 23:11:19 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 3 Jul 2019 23:11:19 +0000 Subject: [Starlingx-discuss] Ceph validation execution progress 96% PASSED, 1 TC BLOCKED by LP1827119/LP1828262. In-Reply-To: <03D458D5BAFF6041973594B00B4E58CE5D6D1063@CRSMSX104.amr.corp.intel.com> References: <03D458D5BAFF6041973594B00B4E58CE5D6D1063@CRSMSX104.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FD2A4D@SHSMSX104.ccr.corp.intel.com> Fernando, Great progress! I am copying this nice report to the mailing list. :) Thx. - cindy From: Hernandez Gonzalez, Fernando Sent: Thursday, July 4, 2019 4:59 AM To: 'Poncea, Ovidiu' ; Xie, Cindy ; Cabrales, Ada ; 'Waheed, Numan' ; Jones, Bruce E ; Chen, Tingjie ; Hu, Yong ; 'Badea, Daniel' ; Chen, Haochuan Z ; Miller, Frank Subject: Ceph validation execution progress 96% PASSED, 1 TC BLOCKED by LP1827119/LP1828262. 
Hi All,

Today I am happy to announce that we are almost done with Ceph validation. There is one single test case (STOR_FS_033) blocked and waiting to be run; all others are now 96% PASSED.

STOR_FS_033 - BLOCKED
2 issues reported related to this functionality, one Gerrit review in process, we will keep it blocked for now
https://review.opendev.org/#/c/661900/
https://bugs.launchpad.net/starlingx/+bug/1827119 (In Progress)
https://bugs.launchpad.net/starlingx/+bug/1828262 (In Progress)

 #   DOMAIN              TEST CASE              PRIORITY   RESULT
 1   Ceph/tier           STOR_TIER_005          1          Pass
 2   Ceph/tier           STOR_TIER_006          1          Pass
 3   Ceph/tier           STOR_TIER_007          1          Pass
 4   Ceph/tier           STOR_TIER_008          1          Pass
 5   Ceph/factor         STOR_REPF_009          2          Pass
 6   Ceph/Swiftenabled   STOR_SWIFT_010         2          Pass
 7   Ceph/cephmon        STOR_PROCESS_011       1          Pass
 8   Ceph/ceph-osd       STOR_PROCESS_012       1          Pass
 9   Ceph                STOR_SCALABILITY_013   2          Pass
 10  Ceph/cephmon        STOR_CORE_014          2          Pass
 11  Ceph/cephmon        STOR_CORE_015          2          Pass
 12  Ceph/cephmon        STOR_CORE_016          2          Pass
 13  Ceph/journals       STOR_JOUR_017          2          Pass
 14  Ceph/ceph-osd       STOR_HW_018            1          Pass
 15  Ceph/journals       STOR_HW_019            2          Pass
 16  Ceph/recovery       STOR_DOR_022           2          Pass
 17  Ceph/recovery       STOR_FAULT_023         1          Pass
 18  Ceph/pools          STOR_FS_024            2          Deferred
 19  Ceph                STOR_PROF_029          2          Pass
 20  Ceph/partitions     STOR_PART_030          1          Pass
 21  Ceph/ceph-mon       STOR_FS_033            2          Block
 22  Ceph/ceph-mgr-api   STOR_IOPATH            2          Pass
 23  Ceph/pools          STOR_POOL              2          Pass
 24  Ceph/RESTFUL        STOR_RESTFUL           2          Pass
 25  Ceph/rbd            STOR_RBD               2          Pass
 26  Ceph/snapshots      STOR_RBD_SNAPSHOT      2          Pass

Thanks

Fernando Hernandez Gonzalez
Cloud Software Engineer
Avenida del Bosque #1001 Col, El Bajío
Zapopan, Jalisco MX, 45019
____________________________________
Office: +52.33.16.45.01.34 inet 86450134

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lilong-neu at neusoft.com  Thu Jul 4 02:13:48 2019
From: lilong-neu at neusoft.com (lilong-neu)
Date: Thu, 4 Jul 2019 10:13:48 +0800
Subject: [Starlingx-discuss] [StarlingX] Regardings magnum service on latest stx version
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EC25709B1@ALA-MBD.corp.ad.wrs.com>
References: <317AB83A10F93A4895BD9A9AE0931BE597E14D@MPS-SYMBX03.neusoft.internal> <2588653EBDFFA34B982FAF00F1B4844EC25709B1@ALA-MBD.corp.ad.wrs.com>
Message-ID:

Hi Brent

Thank you for your prompt reply. I really appreciate your confirmation; it is a real pleasure to cooperate with you.

Best Regards

Long

----------------------------------------
Neusoft Corporation
Neusoft Group (Dalian) Co., Ltd.
No. 901 Huangpu Road, Dalian 116085, PRC
Website: www.neusoft.com
Mobile: (86) 15840916693
Tel:(86 0411) 8483 2794
E-mail: lilong-neu at neusoft.com

On 2019/7/3 9:27 PM, Rowsell, Brent wrote:
>
> Magnum is not supported/integrated in stx2.0
>
> Brent
>
> *From:*lilong-neu at neusoft.com [mailto:lilong-neu at neusoft.com]
> *Sent:* Wednesday, July 3, 2019 6:30 AM
> *To:* starlingx-discuss at lists.starlingx.io
> *Cc:* shuicheng.lin at intel.com; hai.tao.wang at intel.com; fuyong at neusoft.com; lilong-neu at neusoft.com; zuoyl at neusoft.com; Su Yang ; zhaos at neusoft.com; cindy.xie at intel.com; 张志国; zhaos at neusoft.com; wanghejun at neusoft.com
> *Subject:* [Starlingx-discuss] [StarlingX] Regardings magnum service on latest stx version
>
> Hello StarlingX guys,
>
> we have confirmed magnum service at latest stx version many times
> which derectly failed to run.
>
> so we supposed that the new version of starlingx does not support the
> magnum service
>
> and may be replaced by other services.
> > According to the current situation, if magnum is no longer maintained > in the new version of starlingx, > > it is recommended to close the bug: > > https://bugs.launchpad.net/starlingx/+bug/1820324 > > Could you give some suggestions? > > others: > > we tried to configure magnum in the starlingx (2019/06/26) environment, > > according to the official > website(https://docs.openstack.org/magnum/latest/install/install-rdo.html#top), > > > after installation When the related service is prompted, there is no > corresponding download resource. > > The log is as follows: > > --------------------- > > controller-0:~$ sudo yum install openstack-magnum-api > openstack-magnum-conductor python-magnumclient > > Password: > > Loaded plugins: fastestmirror > > Loading mirror speeds from cached hostfile > > base | 3.6 kB 00:00:00 > > extras | 3.4 kB 00:00:00 > > updates | 3.4 kB 00:00:00 > > No package openstack-magnum-api available. > > No package openstack-magnum-conductor available. > > Nothing to do > > --------------------- > > ---------------------------------------- > > Neusoft Corporation > > Neusoft Group (Dalian) Co., Ltd. > > No. 901 Huangpu Road, Dalian 116085, PRC > > Website: www.neusoft.com > > Mobile: (86) 15840916693 > > Tel:(86 0411) 8483 2794 > > E-mail: lilong-neu at neusoft.com > > --------------------------------------------------------------------------------------------------- > Confidentiality Notice: The information contained in this e-mail and > any accompanying attachment(s) > is intended only for the use of the intended recipient and may be > confidential and/or privileged of > Neusoft Corporation, its subsidiaries and/or its affiliates. If any > reader of this communication is > not the intended recipient, unauthorized use, forwarding, printing,  > storing, disclosure or copying > is strictly prohibited, and may be unlawful.If you have received this > communication in error,please > immediately notify the sender by return e-mail, and delete the > original message and all copies from > your system. Thank you. > --------------------------------------------------------------------------------------------------- > --------------------------------------------------------------------------------------------------- Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful.If you have received this communication in error,please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you. --------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Anirudh.Gupta at hsc.com Thu Jul 4 09:16:04 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Thu, 4 Jul 2019 09:16:04 +0000 Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Message-ID: Hi Team, I have picked StarlingX R2.0 latest Gren Build dated 28th June 2019 from the link below: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/ I am following the below document in order to build AIO Simplex R2.0 https://docs.starlingx.io/deploy_install_guides/latest/aio_simplex/index.html#using-the-system-cli-to-bring-up-and-take-down-the-containerized-services After successfully bootstrapping, I am facing the error in the below command while provisioning the controller [sysadmin at controller-0 ~(keystone_admin)]$ system host-if-modify controller-0 ens3 -c platform --networks oam usage: system [--version] [--debug] [-v] [-k] [--cert-file CERT_FILE] [--key-file KEY_FILE] [--ca-file CA_FILE] [--timeout TIMEOUT] [--os-username OS_USERNAME] [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID] [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL] [--os-region-name OS_REGION_NAME] [--os-auth-token OS_AUTH_TOKEN] [--system-url SYSTEM_URL] [--system-api-version SYSTEM_API_VERSION] [--os-service-type OS_SERVICE_TYPE] [--os-endpoint-type OS_ENDPOINT_TYPE] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] ... system: error: unrecognized arguments: --networks oam On checking the help command, I found out that there is no "networks" and "datanetworks" flag in the latest green Build * StarlingX 2019.05 28th June 2019 Build No option mentioned in the document is available [root at controller-0 sysadmin(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-a ] [-x ] [-c ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] However, in the previous green build dated 13th June, both the flags were available in the iso. * StarlingX 2019.05 13Th June Build Both options i.e. networks and -d(datanetworks) mentioned in the document are available [root at localhost wrsroot(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-p ] [-d ] [-a ] [-x ] [-c ] [--networks ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] Can you please suggest what are the new parameters that needs to be configured. Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shuicheng.lin at intel.com Mon Jul 8 00:32:49 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 8 Jul 2019 00:32:49 +0000 Subject: [Starlingx-discuss] controller-0 default password In-Reply-To: <90B8CFEDE03A6549A2DE0880F7B0DF610804CF26@ALA-MBD.corp.ad.wrs.com> References: <90B8CFEDE03A6549A2DE0880F7B0DF610804CF26@ALA-MBD.corp.ad.wrs.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608B2AA7@SHSMSX105.ccr.corp.intel.com> Hi, It has been changed to "sysadmin/sysadmin" for username and password. Best Regards Shuicheng From: Stone, Ronald [mailto:Ronald.Stone at windriver.com] Sent: Wednesday, July 3, 2019 10:07 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] controller-0 default password Hi, The instructions at https://docs.starlingx.io/deploy_install_guides/latest/aio_duplex/index.html#setting-up-controller-0 indicate that the default user and password on the initial post-install boot are wrsroot. I am unable to log in with these credentials and change the password. Can someone confirm the correct strings? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jul 8 00:38:08 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 8 Jul 2019 00:38:08 +0000 Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C76608B2ABC@SHSMSX105.ccr.corp.intel.com> Hi, Please check the new cmd in below mail: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/004941.html Best Regards Shuicheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Thursday, July 4, 2019 5:16 PM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Team, I have picked StarlingX R2.0 latest Gren Build dated 28th June 2019 from the link below: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/ I am following the below document in order to build AIO Simplex R2.0 https://docs.starlingx.io/deploy_install_guides/latest/aio_simplex/index.html#using-the-system-cli-to-bring-up-and-take-down-the-containerized-services After successfully bootstrapping, I am facing the error in the below command while provisioning the controller [sysadmin at controller-0 ~(keystone_admin)]$ system host-if-modify controller-0 ens3 -c platform --networks oam usage: system [--version] [--debug] [-v] [-k] [--cert-file CERT_FILE] [--key-file KEY_FILE] [--ca-file CA_FILE] [--timeout TIMEOUT] [--os-username OS_USERNAME] [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID] [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL] [--os-region-name OS_REGION_NAME] [--os-auth-token OS_AUTH_TOKEN] [--system-url SYSTEM_URL] [--system-api-version SYSTEM_API_VERSION] [--os-service-type OS_SERVICE_TYPE] [--os-endpoint-type OS_ENDPOINT_TYPE] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] ... 
system: error: unrecognized arguments: --networks oam On checking the help command, I found out that there is no "networks" and "datanetworks" flag in the latest green Build * StarlingX 2019.05 28th June 2019 Build No option mentioned in the document is available [root at controller-0 sysadmin(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-a ] [-x ] [-c ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] However, in the previous green build dated 13th June, both the flags were available in the iso. * StarlingX 2019.05 13Th June Build Both options i.e. networks and -d(datanetworks) mentioned in the document are available [root at localhost wrsroot(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-p ] [-d ] [-a ] [-x ] [-c ] [--networks ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] Can you please suggest what are the new parameters that needs to be configured. Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Mon Jul 8 03:52:24 2019 From: yong.hu at intel.com (Yong Hu) Date: Sun, 7 Jul 2019 20:52:24 -0700 Subject: [Starlingx-discuss] =?utf-8?b?562U5aSNOiBbbGlzdHMuc3Rhcmxpbmd4?= =?utf-8?b?Lmlv5Luj5Y+RXVJlOiDnrZTlpI06IEhlbHAgbmVlZGVkIHRvIGFkZCBwYXRj?= =?utf-8?q?h_to_openstackclient?= In-Reply-To: <4e7c694b-461e-bc87-b416-3011f8c8590f@gmail.com> References: <35265f2406eaca448745d61525d8b9ef@sslemail.net> <9738ad90d55f4bb0914d6c4117ca2b0f@inspur.com> <4e7c694b-461e-bc87-b416-3011f8c8590f@gmail.com> Message-ID: as Dean mentioned, there might be other way to resolve the problems you see. Probably you can paste your patch here to community so that guys can come up with some suggestions? On 04/07/2019 8:42 AM, Dean Troyer wrote: > On 7/4/19 10:16 AM, Song Gao song (高松) wrote: >> Yes, it has been fixed, but cannot be directly used because maybe the >> related novaclient or cinderclient project need to upgrade at the same >> time. >> It seems that the only solution is to add patch to the >> starlingx/upstream r2018.10 branch, isn't it? > > If you need to continue with that release, yes you will need to backport > whatever patches you need.  pike is on 'extended maintenance' upstream > so nothing that is not a CVE will get fixed there now. > > If the fix also depends on other client libs you will need to bring > those down too. > > Is this fix required for the internal operations of StarlingX? ie, is it > the puppet or other internal code that needs it, or is it for your > administrative uses that you need it?  
If it is for admin use, you can > install OSC 3.19.0 into a virtual env, even on a controller if > necessary, and use it directly. > > dt > From shuicheng.lin at intel.com Mon Jul 8 04:36:01 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 8 Jul 2019 04:36:01 +0000 Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608B2ABC@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608B2CBC@SHSMSX105.ccr.corp.intel.com> Hi Gupta, The build-pkgs/build-helm-charts is for the STX image build. They are not run in the STX controller. To setup the build system for STX, please have a try with below guide: https://docs.starlingx.io/contributor/build_guides/latest/index.html For the stx-openstack tarball, you could get it here: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/ I will choose "stable-latest" version usually. And "helm-charts-stx-openstack-centos-stable-latest.tgz" is the same file as "stx-openstack-1.0-17-centos-stable-latest.tgz" Best Regards Shuicheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Monday, July 8, 2019 12:05 PM To: Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Shuicheng Thanks for your response. I am able to assign oam and data networks to my interfaces. I have successfully unlocked my Controller In the section, Generate the stx-openstack application tarball, I am unable to find the tar ball "helm-charts-manifest.tgz" In order to generate the tarball, there is no file "$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh" on my machine Also the command "build-pkgs" to build the packages is not running on my controller. And on the repository mentioned below, there are multiple .tgz files, but none is named "helm-charts-manifest.tgz" http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/ Can you please help in resolving this? 
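[Editor's note: the "new cmd" mail Shuicheng links above covers the CLI change behind the --networks error earlier in this thread. A minimal sketch of the replacement flow, assuming the ens3 OAM interface from the original report; the interface-network-assign / interface-datanetwork-assign syntax is assumed from R2.0-era install guides rather than this thread, so confirm it with 'system help' on the actual build.

    # The interface itself no longer carries a --networks argument
    system host-if-modify controller-0 ens3 -c platform
    # The network-to-interface binding is now a separate assignment
    system interface-network-assign controller-0 ens3 oam
    # Data networks likewise moved to their own assignment command
    system interface-datanetwork-assign controller-0 <data-if> <data-network>
]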
Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 08 July 2019 06:08 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi, Please check the new cmd in below mail: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/004941.html Best Regards Shuicheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Thursday, July 4, 2019 5:16 PM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Team, I have picked StarlingX R2.0 latest Gren Build dated 28th June 2019 from the link below: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/ I am following the below document in order to build AIO Simplex R2.0 https://docs.starlingx.io/deploy_install_guides/latest/aio_simplex/index.html#using-the-system-cli-to-bring-up-and-take-down-the-containerized-services After successfully bootstrapping, I am facing the error in the below command while provisioning the controller [sysadmin at controller-0 ~(keystone_admin)]$ system host-if-modify controller-0 ens3 -c platform --networks oam usage: system [--version] [--debug] [-v] [-k] [--cert-file CERT_FILE] [--key-file KEY_FILE] [--ca-file CA_FILE] [--timeout TIMEOUT] [--os-username OS_USERNAME] [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID] [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL] [--os-region-name OS_REGION_NAME] [--os-auth-token OS_AUTH_TOKEN] [--system-url SYSTEM_URL] [--system-api-version SYSTEM_API_VERSION] [--os-service-type OS_SERVICE_TYPE] [--os-endpoint-type OS_ENDPOINT_TYPE] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] ... system: error: unrecognized arguments: --networks oam On checking the help command, I found out that there is no "networks" and "datanetworks" flag in the latest green Build * StarlingX 2019.05 28th June 2019 Build No option mentioned in the document is available [root at controller-0 sysadmin(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-a ] [-x ] [-c ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] However, in the previous green build dated 13th June, both the flags were available in the iso. * StarlingX 2019.05 13Th June Build Both options i.e. networks and -d(datanetworks) mentioned in the document are available [root at localhost wrsroot(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-p ] [-d ] [-a ] [-x ] [-c ] [--networks ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] Can you please suggest what are the new parameters that needs to be configured. Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jul 8 04:39:32 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 8 Jul 2019 04:39:32 +0000 Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build References: <9700A18779F35F49AF027300A49E7C76608B2ABC@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608B2CFA@SHSMSX105.ccr.corp.intel.com> Hi Gupta, The build-pkgs/build-helm-charts is for the STX image build. They are not run in the STX controller. To setup the build system for STX, please have a try with below guide: https://docs.starlingx.io/contributor/build_guides/latest/index.html For the stx-openstack tarball, you could get it here: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/ I will choose "stable-latest" version usually. And "helm-charts-stx-openstack-centos-stable-latest.tgz" is the same file as "stx-openstack-1.0-17-centos-stable-latest.tgz" Best Regards Shuicheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Monday, July 8, 2019 12:05 PM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Shuicheng Thanks for your response. I am able to assign oam and data networks to my interfaces. I have successfully unlocked my Controller In the section, Generate the stx-openstack application tarball, I am unable to find the tar ball "helm-charts-manifest.tgz" In order to generate the tarball, there is no file "$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh" on my machine Also the command "build-pkgs" to build the packages is not running on my controller. And on the repository mentioned below, there are multiple .tgz files, but none is named "helm-charts-manifest.tgz" http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/ Can you please help in resolving this? 
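[Editor's note: since build-helm-charts.sh only exists inside a build environment, the practical path for this thread is to download the pre-built tarball Shuicheng links. A minimal sketch; the 'stable-latest' filename is the one quoted above and tracks the newest green build.

    # Fetch the application chart bundle from the CENGN mirror
    wget http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-latest.tgz
    # Copy it onto the controller before staging it with system application-upload
    scp helm-charts-stx-openstack-centos-stable-latest.tgz sysadmin@<controller-oam-ip>:~
]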
Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 08 July 2019 06:08 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi, Please check the new cmd in below mail: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/004941.html Best Regards Shuicheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Thursday, July 4, 2019 5:16 PM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Team, I have picked StarlingX R2.0 latest Gren Build dated 28th June 2019 from the link below: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/ I am following the below document in order to build AIO Simplex R2.0 https://docs.starlingx.io/deploy_install_guides/latest/aio_simplex/index.html#using-the-system-cli-to-bring-up-and-take-down-the-containerized-services After successfully bootstrapping, I am facing the error in the below command while provisioning the controller [sysadmin at controller-0 ~(keystone_admin)]$ system host-if-modify controller-0 ens3 -c platform --networks oam usage: system [--version] [--debug] [-v] [-k] [--cert-file CERT_FILE] [--key-file KEY_FILE] [--ca-file CA_FILE] [--timeout TIMEOUT] [--os-username OS_USERNAME] [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID] [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL] [--os-region-name OS_REGION_NAME] [--os-auth-token OS_AUTH_TOKEN] [--system-url SYSTEM_URL] [--system-api-version SYSTEM_API_VERSION] [--os-service-type OS_SERVICE_TYPE] [--os-endpoint-type OS_ENDPOINT_TYPE] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] ... system: error: unrecognized arguments: --networks oam On checking the help command, I found out that there is no "networks" and "datanetworks" flag in the latest green Build * StarlingX 2019.05 28th June 2019 Build No option mentioned in the document is available [root at controller-0 sysadmin(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-a ] [-x ] [-c ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] However, in the previous green build dated 13th June, both the flags were available in the iso. * StarlingX 2019.05 13Th June Build Both options i.e. networks and -d(datanetworks) mentioned in the document are available [root at localhost wrsroot(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-p ] [-d ] [-a ] [-x ] [-c ] [--networks ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] Can you please suggest what are the new parameters that needs to be configured. Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jul 8 04:54:06 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 8 Jul 2019 04:54:06 +0000 Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608B2ABC@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608B2CFA@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608B2D3A@SHSMSX105.ccr.corp.intel.com> Hi Gupta, Here is the help of the cmd: [sysadmin at controller-0 ~(keystone_admin)]$ system help application-upload usage: system application-upload [-n ] [-v ] Upload application Helm chart(s) and manifest So it should be "system application-upload -n stx-openstack helm...tgz" Best Regards Shuicheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Monday, July 8, 2019 12:47 PM To: Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Shuicheng I have downloaded "helm-charts-stx-openstack-centos-stable-latest.tgz" from the repository and transferred on my Active Controller. When I am trying to "Stage application for deployment", following is the error output [root at controller-0 sysadmin(keystone_admin)]# system application-upload stx-openstack helm-charts-stx-openstack-centos-stable-latest.tgz usage: system [--version] [--debug] [-v] [-k] [--cert-file CERT_FILE] [--key-file KEY_FILE] [--ca-file CA_FILE] [--timeout TIMEOUT] [--os-username OS_USERNAME] [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID] [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL] [--os-region-name OS_REGION_NAME] [--os-auth-token OS_AUTH_TOKEN] [--system-url SYSTEM_URL] [--system-api-version SYSTEM_API_VERSION] [--os-service-type OS_SERVICE_TYPE] [--os-endpoint-type OS_ENDPOINT_TYPE] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] ... 
system: error: unrecognized arguments: helm-charts-stx-openstack-centos-stable-latest.tgz I have also tried renaming "helm-charts-stx-openstack-centos-stable-latest.tgz" to "helm-charts-manifest.tgz" and re-run the command, but the error is same. Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 08 July 2019 10:10 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Gupta, The build-pkgs/build-helm-charts is for the STX image build. They are not run in the STX controller. To setup the build system for STX, please have a try with below guide: https://docs.starlingx.io/contributor/build_guides/latest/index.html For the stx-openstack tarball, you could get it here: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/ I will choose "stable-latest" version usually. And "helm-charts-stx-openstack-centos-stable-latest.tgz" is the same file as "stx-openstack-1.0-17-centos-stable-latest.tgz" Best Regards Shuicheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Monday, July 8, 2019 12:05 PM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Shuicheng Thanks for your response. I am able to assign oam and data networks to my interfaces. I have successfully unlocked my Controller In the section, Generate the stx-openstack application tarball, I am unable to find the tar ball "helm-charts-manifest.tgz" In order to generate the tarball, there is no file "$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh" on my machine Also the command "build-pkgs" to build the packages is not running on my controller. And on the repository mentioned below, there are multiple .tgz files, but none is named "helm-charts-manifest.tgz" http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/ Can you please help in resolving this? 
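[Editor's note: pulling Shuicheng's correction above together with the install-guide flow — the tarball is a positional argument and the application name goes with -n. A minimal sketch; the application-list and application-apply steps are the usual follow-on and are shown only for context.

    source /etc/platform/openrc
    system application-upload -n stx-openstack helm-charts-stx-openstack-centos-stable-latest.tgz
    # Watch progress until the status reaches 'uploaded', then apply
    system application-list
    system application-apply stx-openstack
]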
Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 08 July 2019 06:08 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi, Please check the new cmd in below mail: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/004941.html Best Regards Shuicheng From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Thursday, July 4, 2019 5:16 PM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build Hi Team, I have picked StarlingX R2.0 latest Gren Build dated 28th June 2019 from the link below: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/ I am following the below document in order to build AIO Simplex R2.0 https://docs.starlingx.io/deploy_install_guides/latest/aio_simplex/index.html#using-the-system-cli-to-bring-up-and-take-down-the-containerized-services After successfully bootstrapping, I am facing the error in the below command while provisioning the controller [sysadmin at controller-0 ~(keystone_admin)]$ system host-if-modify controller-0 ens3 -c platform --networks oam usage: system [--version] [--debug] [-v] [-k] [--cert-file CERT_FILE] [--key-file KEY_FILE] [--ca-file CA_FILE] [--timeout TIMEOUT] [--os-username OS_USERNAME] [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID] [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL] [--os-region-name OS_REGION_NAME] [--os-auth-token OS_AUTH_TOKEN] [--system-url SYSTEM_URL] [--system-api-version SYSTEM_API_VERSION] [--os-service-type OS_SERVICE_TYPE] [--os-endpoint-type OS_ENDPOINT_TYPE] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] ... system: error: unrecognized arguments: --networks oam On checking the help command, I found out that there is no "networks" and "datanetworks" flag in the latest green Build * StarlingX 2019.05 28th June 2019 Build No option mentioned in the document is available [root at controller-0 sysadmin(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-a ] [-x ] [-c ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] However, in the previous green build dated 13th June, both the flags were available in the iso. * StarlingX 2019.05 13Th June Build Both options i.e. networks and -d(datanetworks) mentioned in the document are available [root at localhost wrsroot(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-p ] [-d ] [-a ] [-x ] [-c ] [--networks ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] Can you please suggest what are the new parameters that needs to be configured. Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
From ezpeerchen at gmail.com Mon Jul 8 06:21:59 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Mon, 8 Jul 2019 14:21:59 +0800
Subject: [Starlingx-discuss] Question about All-in-one Duplex on STX R1.0
Message-ID:

Dear all,

Install All-in-one Duplex:
https://docs.starlingx.io/deployment_guides/current/duplex.html

1. What are the environment requirements for controller-1?
2. How does controller-0 find controller-1?
3. Do controller-0 and controller-1 need to have the same HW spec?
4. The document says: *In the Controller-1 console you will see: Waiting for this node to be configured. Please configure the personality for this node from the controller node in order to proceed.* Does controller-1 need a bootable USB in a USB slot, or anything else?

Regards
From gaosong.lc at inspur.com Mon Jul 8 06:28:37 2019
From: gaosong.lc at inspur.com (Song Gao song (高松))
Date: Mon, 8 Jul 2019 06:28:37 +0000
Subject: [Starlingx-discuss] Re: Help needed to add patch to openstackclient
In-Reply-To:
References: <35265f2406eaca448745d61525d8b9ef@sslemail.net> <9738ad90d55f4bb0914d6c4117ca2b0f@inspur.com> <4e7c694b-461e-bc87-b416-3011f8c8590f@gmail.com>
Message-ID: <96b983afce1c4bfd999c906601038e4f@inspur.com>

It's an error in openstackclient/volume/v2/backup.py, line 322: the return value of "take_action" is an object, which is incorrect for the command-line return info. I just modified it to the value of _info().

> -----Original Message-----
> From: Yong Hu [mailto:yong.hu at intel.com]
> Sent: July 8, 2019 11:52
> To: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Re: Help needed to add patch to openstackclient
>
> As Dean mentioned, there might be another way to resolve the problems you see.
> Perhaps you can paste your patch here for the community, so that people can
> come up with some suggestions?
>
> On 04/07/2019 8:42 AM, Dean Troyer wrote:
> > On 7/4/19 10:16 AM, Song Gao song (高松) wrote:
> >> Yes, it has been fixed, but it cannot be used directly, because the
> >> related novaclient or cinderclient projects may need to be upgraded
> >> at the same time.
> >> It seems that the only solution is to add the patch to the
> >> starlingx/upstream r2018.10 branch, isn't it?
> >
> > If you need to continue with that release, yes, you will need to
> > backport whatever patches you need. Pike is on 'extended maintenance'
> > upstream, so nothing that is not a CVE will get fixed there now.
> >
> > If the fix also depends on other client libs, you will need to bring
> > those down too.
> >
> > Is this fix required for the internal operations of StarlingX? I.e., is
> > it the puppet or other internal code that needs it, or is it for your
> > administrative use? If it is for admin use, you can install OSC 3.19.0
> > into a virtual env, even on a controller if necessary, and use it directly.
> >
> > dt
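For context on the fix Song describes: cliff-based "show" commands in openstackclient are expected to return a (columns, data) pair, not the SDK object itself. A minimal Python sketch of that pattern, assuming the surrounding code of volume/v2/backup.py (illustrative, not the verbatim upstream patch):

    # openstackclient/volume/v2/backup.py (sketch)
    def take_action(self, parsed_args):
        volume_client = self.app.client_manager.volume
        backup = volume_client.backups.create(
            parsed_args.volume, name=parsed_args.name)
        # Returning `backup` itself leaves cliff with an object it cannot
        # format; unpack the underlying dict into (columns, data) instead.
        backup._info.pop('links', None)
        return zip(*sorted(backup._info.items()))

The zip(*sorted(...)) idiom is the usual way these commands turn the _info dict into the column/value pair cliff expects.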
From austin.sun at intel.com Mon Jul 8 07:04:32 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Mon, 8 Jul 2019 07:04:32 +0000
Subject: [Starlingx-discuss] Question about All-in-one Duplex on STX R1.0
In-Reply-To:
References:
Message-ID:

Hi Ezpeer:
Please see comments inline.

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Monday, July 8, 2019 2:22 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Question about All-in-one Duplex on STX R1.0

Dear all,

Install All-in-one Duplex:
https://docs.starlingx.io/deployment_guides/current/duplex.html

1. What are the environment requirements for controller-1?
[Austin] The same as controller-0.
2. How does controller-0 find controller-1?
[Austin] controller-1 should be on the same mgmt network and on the OAM network.
3. Do controller-0 and controller-1 need to have the same HW spec?
[Austin] For duplex, it is better for them to be the same.
4. The document says: In the Controller-1 console you will see: Waiting for this node to be configured. Please configure the personality for this node from the controller node in order to proceed. Does controller-1 need a bootable USB in a USB slot, or anything else?
[Austin] No. controller-1 will be installed via PXE install. Please check:
https://docs.starlingx.io/deployment_guides/current/duplex.html#updating-controller-1-host-hostname-and-personality
and https://docs.starlingx.io/deployment_guides/current/duplex.html#id8

Thanks.
BR
Austin Sun.

From yizhoux.xu at intel.com Mon Jul 8 10:44:15 2019
From: yizhoux.xu at intel.com (Xu, YizhouX)
Date: Mon, 8 Jul 2019 10:44:15 +0000
Subject: [Starlingx-discuss] vcpus didn't recognize correctly after turning on hyper-threading
Message-ID: <8E7F30EFCB9B334AAA0491274BEDBE750104D994@SHSMSX105.ccr.corp.intel.com>

Hi all:
I'm testing with an AIO node (ISO version: 20190607T142331Z). To meet increasing vcpu requirements, I turned on hyper-threading (HT) in the BIOS configuration. The number of CPUs in the system is now twice what it was before (12), but the number of vcpus did not change (still 4).

Related to the problem:
1. The number of CPUs is correct in the libvirtd container; check it with 'kubectl exec -it pod_id lscpu'.
2. Checking CPUs with `system host-cpu-list`, it seems the newly added CPUs were allocated to applications and platform correctly:

[wrsroot at controller-0 ~(keystone_admin)]$ system host-cpu-list controller-0
+--------------------------------------+----------+-----------+----------+--------+------------------------------------------+-------------------+
| uuid                                 | log_core | processor | phy_core | thread | processor_model                          | assigned_function |
+--------------------------------------+----------+-----------+----------+--------+------------------------------------------+-------------------+
| e7e806d7-10c2-44bd-9577-fef092b7bd75 | 0        | 0         | 0        | 0      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Platform          |
| 10ba7e55-742a-400d-8e41-15b5e7632c9f | 1        | 0         | 1        | 0      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Platform          |
| 075e8684-43b0-43e9-b8cd-2cbcc6cf06d9 | 2        | 0         | 2        | 0      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Shared            |
| 44c91151-5d32-4510-9003-6466fb9416a4 | 3        | 0         | 3        | 0      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Applications      |
| b44109e1-fa68-49dd-9d0c-317f4e9dd657 | 4        | 0         | 4        | 0      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Applications      |
| dc415b71-181b-4af4-886a-1a6361de027b | 5        | 0         | 5        | 0      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Applications      |
| eb176953-6675-4dbc-88ea-2b5776fea851 | 6        | 0         | 0        | 1      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Platform          |
| 8494b17a-bc06-4400-b4cb-55413d05d653 | 7        | 0         | 1        | 1      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Platform          |
| 32dacf3f-1248-45ea-a122-a4925604ab42 | 8        | 0         | 2        | 1      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Shared            |
| 966d3b34-7a61-4b6b-9c46-b30ece4b27b1 | 9        | 0         | 3        | 1      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Applications      |
| c6f57b41-5a41-4536-9221-b111511306ad | 10       | 0         | 4        | 1      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Applications      |
| ffd2eb6f-7ebe-4ce6-95b0-066d6ab47f56 | 11       | 0         | 5        | 1      | Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz | Applications      |
+--------------------------------------+----------+-----------+----------+--------+------------------------------------------+-------------------+

3. The "cpu_allocation_ratio" item in nova.conf is 16; it seems not to take effect.

Best Regards,
Xu, YiZhou
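One way to see what nova actually registered after the BIOS change is to query the hypervisor record; if the extra HT siblings are visible to nova, the vcpus count should reflect them. A sketch using the standard OpenStack CLI (hostname taken from the post above; output columns per the CLI, not verified on this build):

    openstack hypervisor list
    openstack hypervisor show controller-0 -c vcpus -c vcpus_used

Whether cpu_allocation_ratio is honored also depends on which nova.conf the containerized nova-compute actually reads, so checking the effective config inside the pod may be worthwhile.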
From zhipengs.liu at intel.com Mon Jul 8 14:08:05 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Mon, 8 Jul 2019 14:08:05 +0000
Subject: [Starlingx-discuss] About configfile patch for redfishtool
Message-ID: <93814834B4855241994F290E959305C7530AD92E@SHSMSX104.ccr.corp.intel.com>

Hi all,
Let's discuss this topic here. Your comments are welcome! Thanks!
Zhipeng

-----Original Message-----
From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com]
Sent: July 8, 2019 19:41
To: Saul Wold; Liu, ZhipengS
Cc: Hu, Yong; Rowsell, Brent; Eslimi, Dariush; Khalil, Ghada; Xie, Cindy
Subject: RE: About configfile patch

Saul,
Very good points and suggestion. Thank you.
Zhipeng, can you put this out to the general StarlingX discussion list as well as the Redfish discussion list, and keep us informed as to how the Redfish community reacts to the change request.
Eric.

> -----Original Message-----
> From: Saul Wold [mailto:sgw at linux.intel.com]
> Sent: Sunday, July 07, 2019 6:42 PM
> To: Liu, ZhipengS; MacDonald, Eric
> Cc: Hu, Yong
> Subject: Re: About configfile patch
> Importance: High
>
> Hi Zhipeng, Eric:
>
> I would like to see this move to the general discuss list; I think
> it's appropriate for everyone to understand what's going on. Thanks for
> getting the patch proposed to upstream Redfish.
>
> I am concerned first with the technical debt, and with making sure that
> the Redfish upstream community is aware of what we are proposing / doing
> in StarlingX. I had another look at this, and I now have a better idea
> of why it kept being a concern.
>
> 1) Processing the config file itself inside of options processing is
> not generally a good idea. It does not allow for easy parsing and
> extension of the config file's contents.
>
> 2) I see you are using JSON, which is good; thanks for proposing it to
> the Redfish community. They might have an idea to use a different format
> for the contents of the config file. This gets the JSON idea out there
> now, rather than finding out in 6 months that they decided to use a
> different format.
>
> 3) As I have mentioned before, having plain-text passwords is never my
> favorite way to go, but since we are already down that path with IPMI,
> let's keep going; again, maybe the Redfish community has thought about
> this, or this patch proposal will force that discussion.
>
> My Sunday afternoon thoughts.
>
> Sau!
>
> On 7/4/19 7:24 PM, Liu, ZhipengS wrote:
> > +Saul and Yong,
> >
> > Hi Saul,
> >
> > The email thread below may give you some clarification about your concern.
> >
> > Zhipeng
> >
> > From: Liu, ZhipengS
> > Sent: July 5, 2019 10:20
> > To: 'MacDonald, Eric'
> > Subject: RE: About configfile patch
> >
> > I can see the password through
> >
> > ps -n
> >
> > Thanks!
> > Zhipeng
> >
> > From: Liu, ZhipengS
> > Sent: July 5, 2019 10:01
> > To: 'MacDonald, Eric'
> > Subject: RE: About configfile patch
> >
> > Hi Eric,
> >
> > Thanks for your clarification!
> >
> > BTW, how do I use a process listing? Could you give me an example? :-)
> >
> > Zhipeng
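To make the exposure Eric describes concrete: command-line arguments are world-readable on Linux, so any local user can see a password passed with -P for as long as the command runs. A hypothetical illustration (host, user, and password are made up):

    controller-0:~$ ps -ef | grep ipmitool
    root     12345  6789  0 10:01 ?  00:00:00 ipmitool -H 10.10.10.2 -U admin -P s3cr3t sensor list

This is why the password is written to a short-lived root-only temp file and passed via ipmitool's -f password-file option instead.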
> > From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com]
> > Sent: July 4, 2019 20:26
> > To: Liu, ZhipengS
> > Subject: RE: About configfile patch
> >
> > Hi Zhipeng,
> >
> > See below.
> >
> > Is Saul's concern the technical debt of the config patch, or the pw
> > file in general? It seems the former.
> >
> > What can I do? Should I speak with him?
> >
> > Eric.
> >
> > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
> > Sent: Thursday, July 04, 2019 4:30 AM
> > To: MacDonald, Eric
> > Subject: About configfile patch
> > Importance: High
> >
> > Hi Eric,
> >
> > For the configfile patch, Saul still has some concerns about it and
> > about why we use a password file.
> >
> > Anyway, I have submitted the patch upstream:
> >
> > https://github.com/DMTF/Redfishtool/pull/67
> >
> > From the code, I can see that MTC gets bmc_pw through keyring.
> >
> > [Eric] Or barbican now, yes.
> >
> > Then we pass the bmc_pw through extra_info to the ipmi command thread.
> >
> > [Eric] Yes.
> >
> > "The current implementation using IPMITOOL puts the BMC password
> > into a short-lived root-privilege temp file so that it does not show
> > up in a process listing."
> >
> > Why do we have to use a temp file instead of showing up in a process
> > listing? Can the password really be obtained through a process listing?
> > This point is not clear to me.
> >
> > [Eric] If we use the -P option when invoking ipmitool, then while that
> > command is active, anyone who does a process listing can see the -P
> > value in the listing. This is a security issue, because a non-root user
> > can learn the BMC password for any host just by doing a process listing
> > on the active controller.
> >
> > Could you give me more detailed information? Thanks!
> >
> > From the code below, it seems the related call has been commented out.
> > Does that mean the file may not be removed right away, even with the
> > file open?
> >
> > So I am still not sure which approach is safer.
> >
> > [Eric] The temp file is removed in the thread after execution completion
> > or timeout. Example code taken from mtcThreads.cpp.
> >
> > [Eric] There is also a garbage-collection cleanup audit that ensures
> > these temp files do not linger due to, say, a process restart during
> > command execution.
> >
> > /*
> >  * TODO: fix or figure out why the unlink removes the file right away
> >  *       even with the file open.
> >  */
> > [Eric] The above comment was added simply because, when I was coding,
> > I didn't understand why the unlink removes the file right away. I think
> > now that it was because the file was not open at the time the unlink was
> > executed.
> >
> > In any case, the tmp pw file is still removed, with redundancy:

int hostUtil_mktmpfile ( string hostname, string basename, string & filename, string data )
{
    // buffer to hold the temporary file name
    char tempBuff[MAX_FILENAME_LEN];
    int fd = -1;

    memset(tempBuff,0,sizeof(tempBuff));
    if ( basename.empty() || data.empty() )
    {
        slog ("%s called with one or more bad parameters (%d:%d)\n",
              hostname.c_str(), basename.empty(), data.empty());
        return (0);
    }

    /* add what mkstemp will make unique */
    basename.append("XXXXXX");

    // copy the relevant information into the buffer
    snprintf ( &tempBuff[0], MAX_FILENAME_LEN, "%s", basename.data());

    // create the temporary file; this function will
    // replace the 'X's with random letters
    fd = mkstemp(tempBuff);

    // Call unlink so that whenever the file is closed or the program exits
    // the temporary file is deleted.
    //
    // Note: unlinking removes the file immediately.
    // Commented out. The caller must remove the file.
    //
    // unlink(tempBuff);

Thanks!
Zhipeng

From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com]
Sent: July 2, 2019 19:21
To: Liu, ZhipengS
Subject: WolfPass Sensors

Hi Zhipeng,

I've been upgrading the firmware on our set of WolfPass servers.
Even with the upgrade, I've been having a hard time reading the server sensors through Redfish.
Can you send me the command(s) you use, and the output you see, when dumping the sensors on your WolfPass server?

I use the following commands on the Supermicro, but it seems that the WolfPass servers don't support this method:

redfishtool -r <bmc-ip> -u <user> -p <password> Chassis Thermal
redfishtool -r <bmc-ip> -u <user> -p <password> Chassis Power

Here are the firmware versions I have. I wonder if it's my SDR version. What is yours?
WolfPass      BMC FW          ME             SDR     Redfish Version
WolfPass 1    1.93.870cf4f0   04.00.04.340   1.04    "RedfishVersion": "1.1.0"
WolfPass 2    1.93.870cf4f0   04.00.04.340   1.04    "RedfishVersion": "1.1.0"
WolfPass 3    1.93.870cf4f0   04.00.04.288   1.29    "RedfishVersion": "1.1.0"
WolfPass 4    1.29.7d703f59   04.00.04.288   1.29    No Redfish Support
WolfPass 5    1.29.7d703f59   04.00.04.288   1.29    No Redfish Support
WolfPass 6    1.29.7d703f59   04.00.04.288   1.29    No Redfish Support
WolfPass 7    1.29.7d703f59   04.00.04.288   1.29    No Redfish Support
WolfPass 8    1.43.660a4315   04.00.04.294   1.43    "RedfishVersion": "1.1.0"
WolfPass 9    1.43.660a4315   04.00.04.294   1.43    "RedfishVersion": "1.1.0"
WolfPass 10   1.43.660a4315   04.00.04.294   1.43    "RedfishVersion": "1.1.0"
WolfPass 11   1.43.660a4315   04.00.04.294   1.43    "RedfishVersion": "1.1.0"
WolfPass 12   1.43.660a4315   04.00.04.340   1.43    "RedfishVersion": "1.1.0"
WolfPass 13   1.93.870cf4f0   04.00.04.294   1.43    "RedfishVersion": "1.1.0"
WolfPass 14   1.93.870cf4f0   04.00.04.294   1.43    "RedfishVersion": "1.1.0"
WolfPass 15   1.43.660a4315   04.00.04.340   1.43    "RedfishVersion": "1.1.0"
WolfPass 16   1.43.660a4315   04.00.04.294   1.43    "RedfishVersion": "1.1.0"
WolfPass 17   1.43.660a4315   04.00.04.294   1.43    "RedfishVersion": "1.1.0"

Cheers,
Eric MacDonald, MTS, Engineering, Wind River
direct 613.963.1387  fax: 613.492.7870  skype: eric.r.macdonald
350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5

From abraham.arce.moreno at intel.com Mon Jul 8 16:23:59 2019
From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham)
Date: Mon, 8 Jul 2019 16:23:59 +0000
Subject: [Starlingx-discuss] [Bug] Instance Creating Via Horizon, Help Needed
Message-ID:

Hi,

I am working on bug 1829925, "instance creating via horizon failed Edit" [0], and I need someone's help to confirm that an induced behavior can be replicated in another StarlingX deployment. Can you please run the script provided in any of your StarlingX deployments?

Steps to Reproduce
1. StarlingX: two-node system deployed, provisioned, healthy
2. Instance image: CentOS-7-x86_64-GenericCloud-1905.qcow2
3. Execute the script openstack_instance.sh [1]

In summary, these are the high-level errors I am seeing after more than 200 cycles of a single instance being created and removed:

- Unable to establish connection to http://nova-api-proxy.openstack.svc.cluster.local:8774
- Failed to discover available identity versions when contacting http://keystone.openstack.svc.cluster.local/v3
- controller-1 was set to active, controller-0 was set to standby
- Platform Memory threshold exceeded; threshold 80.00%, actual 89.23%
- Instance centos owned by admin has failed to schedule

For the complete history (without a debug strategy yet), please see [2].

[0] https://bugs.launchpad.net/starlingx/+bug/1829925
[1] https://gist.github.com/xe1gyq/0757b9220051aa92569292de7252e0cb
[2] https://github.com/xe1gyq/starlingx/blob/master/bugs/1829925_a.md

From yong.hu at intel.com Mon Jul 8 17:53:12 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Mon, 8 Jul 2019 17:53:12 +0000
Subject: [Starlingx-discuss] call for review patches on starlingx/config
Message-ID:

Hi core reviewers:
Please support and review these patches, which intend to fix gating issues for stx.2.0:
https://review.opendev.org/#/c/657535 project: starlingx/config
https://review.opendev.org/#/c/667329/ project: starlingx/config

Regards,
Yong

From Anirudh.Gupta at hsc.com Mon Jul 8 04:05:20 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Mon, 8 Jul 2019 04:05:20 +0000
Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build
In-Reply-To: <9700A18779F35F49AF027300A49E7C76608B2ABC@SHSMSX105.ccr.corp.intel.com>
References: <9700A18779F35F49AF027300A49E7C76608B2ABC@SHSMSX105.ccr.corp.intel.com>
Message-ID:

Hi Shuicheng,

Thanks for your response. I am able to assign the oam and data networks to my interfaces, and I have successfully unlocked my controller.

In the section "Generate the stx-openstack application tarball", I am unable to find the tarball "helm-charts-manifest.tgz". There is no file "$MY_REPO_ROOT_DIR/cgcs-root/build-tools/build-helm-charts.sh" on my machine with which to generate the tarball, and the "build-pkgs" command to build the packages does not run on my controller.

Also, the repository below contains multiple .tgz files, but none is named "helm-charts-manifest.tgz":
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/outputs/helm-charts/

Can you please help in resolving this?

Regards
Anirudh Gupta
From: Lin, Shuicheng
Sent: 08 July 2019 06:08
To: Anirudh Gupta; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build

Hi,
Please check the new cmd in the mail below:
http://lists.starlingx.io/pipermail/starlingx-discuss/2019-June/004941.html

Best Regards
Shuicheng
From Anirudh.Gupta at hsc.com Mon Jul 8 04:46:58 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Mon, 8 Jul 2019 04:46:58 +0000
Subject: [Starlingx-discuss] Unable to set OAM interface in R2.0 28th June 2019 Green Build
In-Reply-To: <9700A18779F35F49AF027300A49E7C76608B2CFA@SHSMSX105.ccr.corp.intel.com>
References: <9700A18779F35F49AF027300A49E7C76608B2ABC@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608B2CFA@SHSMSX105.ccr.corp.intel.com>
Message-ID:

Hi Shuicheng,

I have downloaded "helm-charts-stx-openstack-centos-stable-latest.tgz" from the repository and transferred it to my active controller. When I try to "Stage application for deployment", the following error is output:

[root at controller-0 sysadmin(keystone_admin)]# system application-upload stx-openstack helm-charts-stx-openstack-centos-stable-latest.tgz
usage: system [--version] [--debug] [-v] [-k] [--cert-file CERT_FILE] [--key-file KEY_FILE] [--ca-file CA_FILE] [--timeout TIMEOUT] [--os-username OS_USERNAME] [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID] [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL] [--os-region-name OS_REGION_NAME] [--os-auth-token OS_AUTH_TOKEN] [--system-url SYSTEM_URL] [--system-api-version SYSTEM_API_VERSION] [--os-service-type OS_SERVICE_TYPE] [--os-endpoint-type OS_ENDPOINT_TYPE] [--os-user-domain-id OS_USER_DOMAIN_ID] [--os-user-domain-name OS_USER_DOMAIN_NAME] [--os-project-id OS_PROJECT_ID] [--os-project-name OS_PROJECT_NAME] [--os-project-domain-id OS_PROJECT_DOMAIN_ID] [--os-project-domain-name OS_PROJECT_DOMAIN_NAME] ...
system: error: unrecognized arguments: helm-charts-stx-openstack-centos-stable-latest.tgz

I have also tried renaming "helm-charts-stx-openstack-centos-stable-latest.tgz" to "helm-charts-manifest.tgz" and re-running the command, but the error is the same.

Regards
Anirudh Gupta
OS_PROJECT_DOMAIN_NAME] ... system: error: unrecognized arguments: --networks oam On checking the help command, I found out that there is no "networks" and "datanetworks" flag in the latest green Build * StarlingX 2019.05 28th June 2019 Build No option mentioned in the document is available [root at controller-0 sysadmin(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-a ] [-x ] [-c ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] However, in the previous green build dated 13th June, both the flags were available in the iso. * StarlingX 2019.05 13Th June Build Both options i.e. networks and -d(datanetworks) mentioned in the document are available [root at localhost wrsroot(keystone_admin)]# system host-if-modify help usage: system host-if-modify [-n ] [-m ] [-p ] [-d ] [-a ] [-x ] [-c ] [--networks ] [--ipv4-mode ] [--ipv6-mode ] [--ipv4-pool ] [--ipv6-pool ] [-N ] [--vf-driver ] Can you please suggest what are the new parameters that needs to be configured. Regards Anirudh Gupta (Senior Engineer) DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... 
From jose.perez.carranza at intel.com Mon Jul 8 21:20:47 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Mon, 8 Jul 2019 21:20:47 +0000
Subject: [Starlingx-discuss] [containers] How can I do a pull of Docker images with keystone
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2B35BE72@FMSMSX125.amr.corp.intel.com>

Hi Jerry,

I was trying to pull an image, and according to this patch [1] a Keystone token is required to get access to the registry. Can you explain the steps I need to follow to pull an image from the local registry with the correct keystone authentication?

==========
sudo docker -D pull registry.local:9001/docker.io/kolla/ubuntu-source-nova-novncproxy
Using default tag: latest
Error response from daemon: Get https://registry.local:9001/v2/docker.io/kolla/ubuntu-source-nova-novncproxy/manifests/latest: unauthorized: authentication required
===========

1. https://review.opendev.org/#/c/626355/

Regards,
José

From maria.g.perez.ibarra at intel.com Mon Jul 8 22:32:56 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Mon, 8 Jul 2019 22:32:56 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190708
Message-ID:

Status of the Sanity Test for the last CENGN ISO: bootimage.iso from 2019-Jul-08 (link)
Status: GREEN

===========================================
Sanity Test executed in a Containers - Bare Metal Environment

AIO - Simplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 49 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 08 TCs [PASS] | TOTAL: [ 65 TCs PASS ]
Standard - Dedicated Storage (2+2+2): Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 09 TCs [PASS] | TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test executed in a Containers - Virtual Environment

AIO - Simplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 49 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 08 TCs [PASS] | TOTAL: [ 65 TCs PASS ]
Standard - External Storage (2+2+2): Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 08 TCs [PASS] | TOTAL: [ 65 TCs PASS ]
===========================================

Regards
Maria G.
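On José's registry question above: with the change in [1], the local registry validates credentials against Keystone, so docker's standard login flow is the way to present them. A hedged sketch (whether non-admin Keystone users are accepted depends on the registry token server's configuration, which I have not verified):

    sudo docker login registry.local:9001
    Username: admin
    Password: <the keystone admin password>
    sudo docker -D pull registry.local:9001/docker.io/kolla/ubuntu-source-nova-novncproxy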
Thanks in advance -Ricardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Kozyrev at windriver.com Tue Jul 9 12:01:49 2019 From: Alex.Kozyrev at windriver.com (Kozyrev, Alexander (Alex)) Date: Tue, 9 Jul 2019 12:01:49 +0000 Subject: [Starlingx-discuss] EdgeX deeper integration? In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007A82808@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BD0778EFA@fmsmsx123.amr.corp.intel.com> <8b2eee427bc937014881a13cb45414bfb8c19443.camel@intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A82808@ALA-MBD.corp.ad.wrs.com> Message-ID: StarlingX Akraino blueprint is focused on validation of EdgeX Foundry services running on StralingX far edge cluster by providing an integration CI/CD pipeline. And we definitely need some improvements in our current implementation. As you can see from our test_kube_edgex_services.py test script I used not really mature open-source project for EdgeX on Kubernetes [0] Creation of official EdgeX application would benefit to all involved projects. [0] https://github.com/rohitsardesai83/edgex-on-kubernetes -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Friday, 28 June, 2019 09:02 To: Cordoba Malibran, Erich; Jones, Bruce E; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] EdgeX deeper integration? There's definitely interest from the Akraino/StarlingX perspective. For those that aren't aware, the EdgeX Foundry application is part of Akraino's StarlingX blueprint [0]. As plans unfold for the next release of Akraino, we will be happy to have contributions in this area. [0] https://wiki.akraino.org/display/AK/StarlingX+Far+Edge+Distributed+Cloud -----Original Message----- From: Cordoba Malibran, Erich Sent: Thursday, June 27, 2019 5:01 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] EdgeX deeper integration? Actually, in the ongoing review for the pytest framework there is this setup of EdgeX in k8s. https://review.opendev.org/#/c/665419/3/automated-pytest-suite/testcases/functional/z_containers/test_kube_edgex_services.py This could be a good start point to create an EdgeX application. -Erich On Thu, 2019-06-27 at 19:54 +0000, Jones, Bruce E wrote: > We had an internal discussion today about EdgeX. We are seeing signs > of it increasing in use and importance in the Edge ecosystem. > > It is fairly straightforward to build and run an EdgeX application > under StarlingX today. We had it running in the Intel booth at the > Denver Summit. > > My question for the community is this: Is there value or interest in > making EdgeX apps even easier to run within StarlingX? For example, > we could create an EdgeX application in StarlingX and allow users to > apply it to the system, to allow the EdgeX services to run and be > managed by StarlingX. This would add some ease of use benefits for > EdgeX users while also putting us in the position of maintaining an up > to date version of EdgeX. > > Is this something we should work on as a community? 
> > brucej > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Tue Jul 9 12:50:51 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 9 Jul 2019 12:50:51 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/10 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FD9921@SHSMSX104.ccr.corp.intel.com> Agenda for 7/10 meeting: - Ceph test status update (Abraham/Fernando) - skip due to the last blocking test; - stx 2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin, Alex/Bin) - Python2to3 plan review (Austin) - Opens (all) Please add topics if you'd like to bring to the sub-project for discussion. Thx. - cindy -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, July 10, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From yong.hu at intel.com Tue Jul 9 15:52:36 2019 From: yong.hu at intel.com (Hu, Yong) Date: Tue, 9 Jul 2019 15:52:36 +0000 Subject: [Starlingx-discuss] EdgeX deeper integration? In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD0778EFA@fmsmsx123.amr.corp.intel.com> <8b2eee427bc937014881a13cb45414bfb8c19443.camel@intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A82808@ALA-MBD.corp.ad.wrs.com> Message-ID: <852BD152-9C30-4F3A-8250-0134D99B1B83@intel.com> Alex, About "official EdgeX application", did you mean an Armada application with helm charts in the context of StarlingX? If so, I think it's worth making it as an "optional" application and sysadmin can install/apply it when needed. On 09/07/2019, 5:04 AM, "Kozyrev, Alexander (Alex)" wrote: StarlingX Akraino blueprint is focused on validation of EdgeX Foundry services running on StralingX far edge cluster by providing an integration CI/CD pipeline. And we definitely need some improvements in our current implementation. As you can see from our test_kube_edgex_services.py test script I used not really mature open-source project for EdgeX on Kubernetes [0] Creation of official EdgeX application would benefit to all involved projects. 
[0] https://github.com/rohitsardesai83/edgex-on-kubernetes -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Friday, 28 June, 2019 09:02 To: Cordoba Malibran, Erich; Jones, Bruce E; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] EdgeX deeper integration? There's definitely interest from the Akraino/StarlingX perspective. For those that aren't aware, the EdgeX Foundry application is part of Akraino's StarlingX blueprint [0]. As plans unfold for the next release of Akraino, we will be happy to have contributions in this area. [0] https://wiki.akraino.org/display/AK/StarlingX+Far+Edge+Distributed+Cloud -----Original Message----- From: Cordoba Malibran, Erich Sent: Thursday, June 27, 2019 5:01 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] EdgeX deeper integration? Actually, in the ongoing review for the pytest framework there is this setup of EdgeX in k8s. https://review.opendev.org/#/c/665419/3/automated-pytest-suite/testcases/functional/z_containers/test_kube_edgex_services.py This could be a good start point to create an EdgeX application. -Erich On Thu, 2019-06-27 at 19:54 +0000, Jones, Bruce E wrote: > We had an internal discussion today about EdgeX. We are seeing signs > of it increasing in use and importance in the Edge ecosystem. > > It is fairly straightforward to build and run an EdgeX application > under StarlingX today. We had it running in the Intel booth at the > Denver Summit. > > My question for the community is this: Is there value or interest in > making EdgeX apps even easier to run within StarlingX? For example, > we could create an EdgeX application in StarlingX and allow users to > apply it to the system, to allow the EdgeX services to run and be > managed by StarlingX. This would add some ease of use benefits for > EdgeX users while also putting us in the position of maintaining an up > to date version of EdgeX. > > Is this something we should work on as a community? > > brucej > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Tue Jul 9 16:15:26 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 9 Jul 2019 16:15:26 +0000 Subject: [Starlingx-discuss] Community Call (July 10, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A869A4@ALA-MBD.corp.ad.wrs.com> Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include... - defect trend / gating launchpads - defer non-gating low priority work - use the power of -2 - documentation update (Michael Tullis) - first contact update - mailing list responsiveness - Python 2 --> Python 3 Please feel free to add topics on the etherpad [0]. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190710T1400 From scott.little at windriver.com Tue Jul 9 17:48:23 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 9 Jul 2019 13:48:23 -0400 Subject: [Starlingx-discuss] starlingx.cengn.ca is currently down Message-ID: <0a4bc291-c5f0-b35d-4361-7ba345d4b8f4@windriver.com> starlingx.cengn.ca is currently down. The problem has been reported and is being worked. I'll update you when we learn more. Scott From ada.cabrales at intel.com Tue Jul 9 18:49:53 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 9 Jul 2019 18:49:53 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting notes - 07/09/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CE54058@FMSMSX114.amr.corp.intel.com> Agenda for 07/09 Attendees: Bill, Fernando, Cristopher, JC, Jose, JP, Maria P, Ada, Elio 1. Sanity status - Cristopher Green Continues - the bug that was causing yellows last week has been fixed. Doing some improvements to the automation in order to have shorter times. Thanks for the records on the setup + provisioning + sanity times. Bill to remind Numan on collecting the info from WR environment. We will have to run sanity on the master and RC1 branch once this is done. Ada and Cristopher to plan on having the 2 runs daily. 2. Regression testing status - Elio, Numan Regression report - show manual + automation progress - Ada and Numan to define it. 273 test executed. 222 Pass, 15 failures, 149 pending. 36 blocked, mostly coming from SRIOV. 2 Related to simplex, 1 heat, 21 networking, 4 security (B&R). 8 System tests. Trunking, SRIOV and Data network are the most important things with problems. Plan is to finish first round of manual execution this week. There are still questions on some tests cases. An email will be sent asking for help. Waiting for feedback on IPv6 test. - Tests related to IPv6 will be run on Data network only. Automated tests - 50% of execution done. ~49% passing. Numan has updated the regression tracker. Please make sure to have the file updated for having accurate numbers. For the report, the pass rate will be calculated considering the total as (passed + failed). The report from today will be sent in this way. Blocked tests are not being considered for the pass rate calculation. This might affect the metric in the future, once these are not blocked anymore. There are two launchpads marked as 'incomplete', please take a look at and complete them https://bugs.launchpad.net/starlingx/+bug/1834083 https://bugs.launchpad.net/starlingx/+bug/1834245 Make sure of following the template by the time you create launchpads. 3. Feature testing status Report - https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237 OpenStack patch elimination - 2 tests pending. Information was sent and is being reviewed. One launchpad opened yesterday related to displaying the sensors information. Cannot display host sensors list - https://bugs.launchpad.net/starlingx/+bug/1835829 Containers - 14 tests remaining. Waiting for information on 4. Planning to finish this week. Ask Numan for update offline. 4. Opens No opens. Everyone is eager to get back to work. 
Regards Ada From scott.little at windriver.com Tue Jul 9 19:20:59 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 9 Jul 2019 15:20:59 -0400 Subject: [Starlingx-discuss] starlingx.cengn.ca is currently down In-Reply-To: <0a4bc291-c5f0-b35d-4361-7ba345d4b8f4@windriver.com> References: <0a4bc291-c5f0-b35d-4361-7ba345d4b8f4@windriver.com> Message-ID: <84a1fd71-d899-310e-8de8-cea0f7449278@windriver.com> starlingx.cengn.ca is back up. An extended period of under-voltage to their server room drained their UPS and eventually brought everything down. Scott On 2019-07-09 1:48 p.m., Scott Little wrote: > starlingx.cengn.ca is currently down. > > The problem has been reported and is being worked. > > I'll update you when we learn more. > > Scott > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Tue Jul 9 23:26:08 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 9 Jul 2019 23:26:08 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190709 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-09 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
From maria.g.perez.ibarra at intel.com Tue Jul 9 23:44:36 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 9 Jul 2019 23:44:36 +0000
Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/09/19
Message-ID:

StarlingX 2.0 Release Status:
ISO: BUILD_ID="20190705T013000Z" from (link)

----------------------------------------------------------------------
Overall Results:

Total = 422
Pass = 223
Fail = 14
Blocked = 36
Not Run = 149
Total executed = 273
Pass Rate = 94.09%
Formula used: Pass Rate = pass * 100 / (pass + fail)

----------------------------------------------------------------------
Results per Domain:

Regression - AIO-SX             23 PASS | 1 FAIL | 2 BLOCKED
Regression - Backup & Restore
Regression - Distributed Cloud
Regression - Gnocchi            15 PASS
Regression - FM
Regression - HA                  2
Regression - Heat               12 PASS | 1 BLOCKED
Regression - Horizon             4 PASS
Regression - Install and Config  2 PASS
Regression - Maintenance         5 PASS | 1 FAIL
Regression - Networking         84 PASS | 8 FAIL | 21 BLOCKED
Regression - Nova                2 PASS
Regression - Security           32 PASS | 2 FAIL | 4 BLOCKED
Regression - Storage
Regression - Inventory          29 PASS | 1 FAIL
System Test                     12 PASS | 1 FAIL | 8 BLOCKED

---------------------------------------------------------------------------
Bugs:

Controller can't unlock after lock on AIO-SX
https://bugs.launchpad.net/starlingx/+bug/1833472
user does not login within configured time(60s) login is aborted
https://bugs.launchpad.net/starlingx/+bug/1833469
After pull data cable on the compute, no alarm has triggered
https://bugs.launchpad.net/starlingx/+bug/1834512
Cannot create instances with SRIOV port
https://bugs.launchpad.net/starlingx/+bug/1835318
Instance cannot create with network driver e1000 and rtl8139
https://bugs.launchpad.net/starlingx/+bug/1835300
Containers: lock_host failed on a host with config_drive VM
https://bugs.launchpad.net/starlingx/+bug/1821026
200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot
https://bugs.launchpad.net/starlingx/+bug/1832047
virsh only listing one volume, even though there was an additional volume attached after instantiation
https://bugs.launchpad.net/starlingx/+bug/1834194
3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host
https://bugs.launchpad.net/starlingx/+bug/1834255
Device UUID is missing when boot up VM with block device
https://bugs.launchpad.net/starlingx/+bug/1835282
stx-openstack apply takes longer time when lock and unlock on standby controller
https://bugs.launchpad.net/starlingx/+bug/1834083
Port list was not showing for some computes during install
https://bugs.launchpad.net/starlingx/+bug/1834245
Reaching VM limit creation (10) virsh doesn't list all running instances
https://bugs.launchpad.net/starlingx/+bug/1835853
ceph-mgr restful plugin error preventing platform-integ-apps from auto applying
https://bugs.launchpad.net/starlingx/+bug/1835938
Instance created with a flat network spawns in error state
https://bugs.launchpad.net/starlingx/+bug/1835965
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute
https://bugs.launchpad.net/starlingx/+bug/1835807

Total Bugs: 16

-----------------------------------------------------------------------------
For more details on the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175

Regards!
Maria G
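A quick worked check of the pass-rate formula used in the report above, with the Pass and Fail counts it lists (blocked and not-run tests are excluded by construction):

    $\text{Pass Rate} = \frac{\text{pass} \times 100}{\text{pass} + \text{fail}} = \frac{223 \times 100}{223 + 14} = \frac{22300}{237} \approx 94.09\%$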
From build.starlingx at gmail.com Wed Jul 10 02:37:06 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 9 Jul 2019 22:37:06 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 17 - Failure!
Message-ID: <1732535876.2.1562726229999.JavaMail.javamailuser@localhost>

Project: STX_build_lst_audit
Build #: 17
Status: Failure
Timestamp: 20190710T015259Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190710T013000Z
DOCKER_BUILD_ID: jenkins-master-20190710T013000Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190710T013000Z/logs
MASTER_JOB_NAME: STX_build_master_master
MY_REPO_ROOT: /localdisk/designer/jenkins/master
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos

From build.starlingx at gmail.com Wed Jul 10 03:36:51 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 9 Jul 2019 23:36:51 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 334 - Failure!
Message-ID: <339844466.5.1562729812473.JavaMail.javamailuser@localhost>

Project: STX_build_pre_installer
Build #: 334
Status: Failure
Timestamp: 20190710T015254Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190710T013000Z
DOCKER_BUILD_ID: jenkins-master-20190710T013000Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190710T013000Z/logs
MASTER_JOB_NAME: STX_build_master_master
MY_REPO_ROOT: /localdisk/designer/jenkins/master

From build.starlingx at gmail.com Wed Jul 10 03:36:54 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 9 Jul 2019 23:36:54 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 175 - Failure!
Message-ID: <16543661.8.1562729815866.JavaMail.javamailuser@localhost>

Project: STX_build_master_master
Build #: 175
Status: Failure
Timestamp: 20190710T013000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From austin.sun at intel.com Wed Jul 10 08:03:24 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Wed, 10 Jul 2019 08:03:24 +0000
Subject: [Starlingx-discuss] Python2 -> Python3
In-Reply-To:
References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com>
Message-ID:

Hi All:

The new sheet was updated to [1]. I added column N (repo_info) to record where each package comes from.
There are 11 packages that do not come from CentOS directly but include Python and may not be compatible with both Python 2 and Python 3:

Package                       | Who is using
openvswitch                   | ovs
python-aniso8601              | keystone
python-cephclient             | ceph
python-cephfs                 | ceph
python-django-bash-completion | sysinv
python-smartpm                | standalone package
python-unittest2              | sysinv
python-XStatic-jquery-ui      | stx-gui
qemu-kvm-ev                   | mtce-compute
requests-toolbelt             | cgcs-patch-controller
rpm-python                    | cgcs-patch-controller

I will continue to check those 11 packages.

[1] https://bugs.launchpad.net/starlingx/+bug/1808073

Thanks.
BR
Austin Sun.

-----Original Message-----
From: Sun, Austin
Sent: Thursday, July 4, 2019 11:43 AM
To: Xie, Cindy ; Hu, Yong ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Python2 -> Python3

Hi Cindy:
Yes, we will do it and update the sheet.

Thanks.
BR
Austin Sun.

-----Original Message-----
From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Thursday, July 4, 2019 11:37 AM
To: Hu, Yong ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Python2 -> Python3

Austin,
Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx:

In your column "I", for your "3rd party" category, break down "CentOS package" vs. "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for the "risk" ones.

Thanks. - cindy

-----Original Message-----
From: Yong Hu [mailto:yong.hu at intel.com]
Sent: Thursday, July 4, 2019 11:25 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Python2 -> Python3

A general question: do we have experience and confidence to ensure 2 Python versions (both interpreters and pip libs) can co-exist and each package can correctly refer to them?

In my view the best solution is to wait for CentOS 8.0 :-)

On 03/07/2019 2:55 PM, Dean Troyer wrote:
> On 7/3/19 4:07 PM, Saul Wold wrote:
>> The current proposal seems to be to completely convert the base
>> CentOS7.6 system level python to use python3. This carries a high
>> risk factor, as changing out all system-level python code could have a
>> cascade effect on system functionality and additional dependencies.
>
> Changing the distro/system Python version out from under the rest of
> the distro seems like an enormous time sink, much less a significant
> reliability risk.
>
>> A better solution would be to build python3 and the associated
>> requirements from the existing RHEL EPEL (Extra Packages for
>> Enterprise Linux) Source RPMs repo and install them into the ISO.
>> This version correctly installs in a segregated directory tree.
>
> We would probably want to run a significant subset of the upstream
> OpenStack testing on this combination as it is not (AFAIK) tested there.
> But this is true of any runtime + distro combination that is not in
> the fairly short list of combinations that upstream OpenStack actively
> tests.
>
>> Another option would be to delay the actual python2 conversion to
>> StarlingX 4.0; the OpenStack Train release will still support python2.
>
> One downside to this is it leaves us no margin to defer the change
> again; this is our second chance as it were. OpenStack U (as of now)
> is likely to drop py2 support as a guarantee across-the-board.
>
>> There is still work that is needed beyond the conversion of the
>> python code itself to things like RPM specfile data and other source
>> code (such as C code that has #includes of python2.7). It's not
>> clear to me how much functional testing with python3 has occurred for
>> the flock beyond what Dean has started with devstack.
>
> I managed to get the fault services running on py3; sysinv fell over
> during the dbsync in my quick post-PTG trial run. That is as far as I
> took it. Anyone who wants to try can pick out the local.conf I posted [0]
>
> dt
>
> [0] http://paste.openstack.org/show/753844/

From ezpeerchen at gmail.com Wed Jul 10 09:52:36 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Wed, 10 Jul 2019 17:52:36 +0800
Subject: [Starlingx-discuss] Question about Distributed Cloud on STX R1.0
Message-ID:

Dear all,

How do I enable the Distributed Cloud feature on STX R1.0? I can't find any documentation about this feature.

Regards

From Alex.Kozyrev at windriver.com Wed Jul 10 10:13:41 2019
From: Alex.Kozyrev at windriver.com (Kozyrev, Alexander (Alex))
Date: Wed, 10 Jul 2019 10:13:41 +0000
Subject: [Starlingx-discuss] EdgeX deeper integration?
In-Reply-To: <852BD152-9C30-4F3A-8250-0134D99B1B83@intel.com>
References: <9A85D2917C58154C960D95352B22818BD0778EFA@fmsmsx123.amr.corp.intel.com> <8b2eee427bc937014881a13cb45414bfb8c19443.camel@intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A82808@ALA-MBD.corp.ad.wrs.com> <852BD152-9C30-4F3A-8250-0134D99B1B83@intel.com>
Message-ID:

Of course, it could be an optional Armada application in the StarlingX repo to make things easy. But better yet, I would contribute to EdgeX Foundry directly and create the helm charts there.

Regards,
Alex

-----Original Message-----
From: Hu, Yong [mailto:yong.hu at intel.com]
Sent: Tuesday, 09 July, 2019 11:53
To: Kozyrev, Alexander (Alex); Zvonar, Bill; Cordoba Malibran, Erich; Jones, Bruce E; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] EdgeX deeper integration?

Alex,
About "official EdgeX application", did you mean an Armada application with helm charts in the context of StarlingX?
If so, I think it's worth making it an "optional" application that sysadmin can install/apply when needed.

On 09/07/2019, 5:04 AM, "Kozyrev, Alexander (Alex)" wrote:

The StarlingX Akraino blueprint is focused on validation of EdgeX Foundry services running on a StarlingX far edge cluster by providing an integration CI/CD pipeline. And we definitely need some improvements in our current implementation. As you can see from our test_kube_edgex_services.py test script, I used a not-really-mature open-source project for EdgeX on Kubernetes [0]. Creation of an official EdgeX application would benefit all involved projects.

[0] https://github.com/rohitsardesai83/edgex-on-kubernetes

-----Original Message-----
From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
Sent: Friday, 28 June, 2019 09:02
To: Cordoba Malibran, Erich; Jones, Bruce E; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] EdgeX deeper integration?
There's definitely interest from the Akraino/StarlingX perspective. For those that aren't aware, the EdgeX Foundry application is part of Akraino's StarlingX blueprint [0]. As plans unfold for the next release of Akraino, we will be happy to have contributions in this area.

[0] https://wiki.akraino.org/display/AK/StarlingX+Far+Edge+Distributed+Cloud

-----Original Message-----
From: Cordoba Malibran, Erich
Sent: Thursday, June 27, 2019 5:01 PM
To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] EdgeX deeper integration?

Actually, in the ongoing review for the pytest framework there is this setup of EdgeX in k8s:

https://review.opendev.org/#/c/665419/3/automated-pytest-suite/testcases/functional/z_containers/test_kube_edgex_services.py

This could be a good starting point to create an EdgeX application.

-Erich

On Thu, 2019-06-27 at 19:54 +0000, Jones, Bruce E wrote:
> We had an internal discussion today about EdgeX. We are seeing signs
> of it increasing in use and importance in the Edge ecosystem.
>
> It is fairly straightforward to build and run an EdgeX application
> under StarlingX today. We had it running in the Intel booth at the
> Denver Summit.
>
> My question for the community is this: Is there value or interest in
> making EdgeX apps even easier to run within StarlingX? For example,
> we could create an EdgeX application in StarlingX and allow users to
> apply it to the system, to allow the EdgeX services to run and be
> managed by StarlingX. This would add some ease of use benefits for
> EdgeX users while also putting us in the position of maintaining an up
> to date version of EdgeX.
>
> Is this something we should work on as a community?
>
> brucej

From cindy.xie at intel.com Wed Jul 10 14:02:49 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 10 Jul 2019 14:02:49 +0000
Subject: [Starlingx-discuss] Notes: Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/10
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FDB994@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for 7/10 meeting:

- Ceph test status update (Abraham/Fernando) - skipped due to the last blocking test
  - test report: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1145711595
  - 1 last blocking LP pending patch merge: https://review.opendev.org/#/c/661900/ from Tingjie.

- stx 2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin, Alex/Bin)
  - stx.storage: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
    1835938: new bug, expect to retest it using the new version.
    1833738: mark it not gating.
    1830938: PR already pending in starlingx-staging, backport from upstream: https://github.com/starlingx-staging/stx-ceph/pull/34. AR: Saul/Daniel to review it and help to merge it.
    1827119: pending final +2 and merge.
  - stx.distro.other: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
    667811: verified by dumping out the database; request testing using the original setup.

- Python2to3 plan review (Austin)
  - Storyboards: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.python2&project_group_id=86
  - initial analysis for SB#2006158: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5276021/+files/rpm_python_status-stx.2.0_repoinfo_new.xlsx
    AR for Austin to send out a follow-up email explaining how the list came down to 11 packages and then to 6. Let's focus on those 6 RPMs first.
    Open still there - no decision yet because Brent is not in today's call: do we need to work on the RPMs in the CentOS repo ourselves, or wait for CentOS 8.0 to be available?

- Opens (all)
  Tingjie: Ceph containerization - more technical details investigated. AR for Tingjie to share the doc to the mailing list so that community folks can review it. It needs to be attached to the BP Tingjie submitted for review earlier. The recommendation from the TSC for this feature is to break it down into phases, so that part of the feature can merge in 3.0, but not all of it.

-----Original Message-----
From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Tuesday, July 9, 2019 8:51 PM
To: starlingx-discuss at lists.starlingx.io; Rowsell, Brent ; Wold, Saul
Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/10

Agenda for 7/10 meeting:
- Ceph test status update (Abraham/Fernando) - skip due to the last blocking test
- stx 2.0 bug triage and review (Tingjie/Ovidiu/Daniel/Martin, Alex/Bin)
- Python2to3 plan review (Austin)
- Opens (all)

Please add topics if you'd like to bring them to the sub-project for discussion. Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, July 10, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
  o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
  o Zoom link: https://zoom.us/j/342730236
  o Dialing in from phone:
    o Dial (for higher quality, dial a number based on your current location):
      US: +1 669 900 6833 or +1 646 876 9923
    o Meeting ID: 342 730 236
    o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
  o https://etherpad.openstack.org/p/stx-distro-other

From austin.sun at intel.com Wed Jul 10 14:03:01 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Wed, 10 Jul 2019 14:03:01 +0000
Subject: [Starlingx-discuss] Python2 -> Python3
In-Reply-To:
References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com>
Message-ID:

Hi All:

The logic here is: if a package comes from CentOS directly, we will wait for the upgrade to CentOS 8.x for Python 3 compliance.
Please filter 'Do not contain centos' in column N; it will then show the 11 packages below.
As synced in the non-OpenStack distro meeting, we can further filter out the 4 packages that come from the Fedora project, and python-cephclient as a flock service.
So the 6 packages below come from 3rd parties and might not be Python 2-to-3 compliant:

Package           | Who is using
openvswitch       | ovs
python-cephfs     | ceph
python-smartpm    | standalone package
qemu-kvm-ev       | mtce-compute
requests-toolbelt | cgcs-patch-controller
rpm-python        | cgcs-patch-controller

Thanks.
BR
Austin Sun.

-----Original Message-----
From: Sun, Austin [mailto:austin.sun at intel.com]
Sent: Wednesday, July 10, 2019 4:03 PM
To: Xie, Cindy ; Hu, Yong ; 'starlingx-discuss at lists.starlingx.io'
Subject: Re: [Starlingx-discuss] Python2 -> Python3

Hi All:
The new sheet was updated to [1]. I added column N (repo_info) to record where each package comes from.
There are 11 packages that do not come from CentOS directly but include Python and may not be compatible with both Python 2 and Python 3:

Package                       | Who is using
openvswitch                   | ovs
python-aniso8601              | keystone
python-cephclient             | ceph
python-cephfs                 | ceph
python-django-bash-completion | sysinv
python-smartpm                | standalone package
python-unittest2              | sysinv
python-XStatic-jquery-ui      | stx-gui
qemu-kvm-ev                   | mtce-compute
requests-toolbelt             | cgcs-patch-controller
rpm-python                    | cgcs-patch-controller

I will continue to check those 11 packages.

[1] https://bugs.launchpad.net/starlingx/+bug/1808073

Thanks.
BR
Austin Sun.

-----Original Message-----
From: Sun, Austin
Sent: Thursday, July 4, 2019 11:43 AM
To: Xie, Cindy ; Hu, Yong ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Python2 -> Python3

Hi Cindy:
Yes, we will do it and update the sheet.

Thanks.
BR
Austin Sun.

-----Original Message-----
From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Thursday, July 4, 2019 11:37 AM
To: Hu, Yong ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Python2 -> Python3

Austin,
Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx:

In your column "I", for your "3rd party" category, break down "CentOS package" vs. "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for the "risk" ones.

Thanks.
- cindy

-----Original Message-----
From: Yong Hu [mailto:yong.hu at intel.com]
Sent: Thursday, July 4, 2019 11:25 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Python2 -> Python3

A general question: do we have experience and confidence to ensure 2 Python versions (both interpreters and pip libs) can co-exist and each package can correctly refer to them?

In my view the best solution is to wait for CentOS 8.0 :-)

On 03/07/2019 2:55 PM, Dean Troyer wrote:
> On 7/3/19 4:07 PM, Saul Wold wrote:
>> The current proposal seems to be to completely convert the base
>> CentOS7.6 system level python to use python3. This carries a high
>> risk factor, as changing out all system-level python code could have a
>> cascade effect on system functionality and additional dependencies.
>
> Changing the distro/system Python version out from under the rest of
> the distro seems like an enormous time sink, much less a significant
> reliability risk.
>
>> A better solution would be to build python3 and the associated
>> requirements from the existing RHEL EPEL (Extra Packages for
>> Enterprise Linux) Source RPMs repo and install them into the ISO.
>> This version correctly installs in a segregated directory tree.
>
> We would probably want to run a significant subset of the upstream
> OpenStack testing on this combination as it is not (AFAIK) tested there.
> But this is true of any runtime + distro combination that is not in
> the fairly short list of combinations that upstream OpenStack actively
> tests.
>
>> Another option would be to delay the actual python2 conversion to
>> StarlingX 4.0; the OpenStack Train release will still support python2.
>
> One downside to this is it leaves us no margin to defer the change
> again; this is our second chance as it were. OpenStack U (as of now)
> is likely to drop py2 support as a guarantee across-the-board.
>
>> There is still work that is needed beyond the conversion of the
>> python code itself to things like RPM specfile data and other source
>> code (such as C code that has #includes of python2.7). It's not
>> clear to me how much functional testing with python3 has occurred for
>> the flock beyond what Dean has started with devstack.
>
> I managed to get the fault services running on py3; sysinv fell over
> during the dbsync in my quick post-PTG trial run. That is as far as I
> took it. Anyone who wants to try can pick out the local.conf I posted [0]
>
> dt
>
> [0] http://paste.openstack.org/show/753844/
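A rough way to screen packages like the ones Austin lists above is a syntax-level check: ask rpm which Python files an installed package owns and try parsing each one under a Python 3 interpreter. This is only a sketch (the default package name is an example from the list, not from any existing tool), and passing it proves nothing about runtime behavior:

    #!/usr/bin/env python3
    # Screening sketch: do an installed RPM's Python sources parse under py3?
    import ast
    import subprocess
    import sys

    def rpm_python_files(pkg):
        # 'rpm -ql' lists the files owned by an installed package
        out = subprocess.run(["rpm", "-ql", pkg],
                             capture_output=True, text=True, check=True)
        return [f for f in out.stdout.splitlines() if f.endswith(".py")]

    def parses_under_py3(path):
        try:
            with open(path, "rb") as fh:
                ast.parse(fh.read())  # SyntaxError here means py2-only syntax
            return True
        except SyntaxError:
            return False

    if __name__ == "__main__":
        pkg = sys.argv[1] if len(sys.argv) > 1 else "python-smartpm"
        bad = [f for f in rpm_python_files(pkg) if not parses_under_py3(f)]
        print("%s: %d file(s) fail to parse under Python 3" % (pkg, len(bad)))
        for f in bad:
            print("  " + f)

A clean result only means the syntax parses; runtime differences (bytes vs. str, renamed stdlib modules) still need the kind of functional testing discussed in this thread.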
From cindy.xie at intel.com Wed Jul 10 14:41:37 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 10 Jul 2019 14:41:37 +0000
Subject: [Starlingx-discuss] bug severity and priority
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com>

Bill/Ghada,
I am sending out my definition of bug severity and priority:

Bug Exposure or Severity | Definition
1 - Critical | Product or key feature is not usable for its intended purpose.
2 - High | Product or key feature is not reliably usable for its intended purpose, or use is significantly impaired.
3 - Medium | Product or key feature is usable provided there is a workaround.
4 - Low | Tolerable impact to user experience, with minimal service and support costs.

Bug Priority | Definition
P1 - Stopper | Resolution of this defect takes precedence over other defects and most other development activities. This level is used to focus maximum development team resources to resolve a defect in the shortest possible timeframe.
P2 - High | Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. The urgency to fix a P2 priority defect is imminent. P2 priority defects are intended to be resolved by the next planned external release of the software.
P3 - Medium | Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. P3 priority defects must have a planned timeframe for a verified resolution.
P4 - Low | Resolution of the defect has the least urgency; P4 priority defects may or may not have plans to resolve.

Let's discuss this and agree on how we'd like to use them. My suggestion for the current "Medium" bugs is that we can mark them as "stx.3.0", and then at the beginning of stx.3 their priority can be moved to "high", reflecting the intent to get them fixed in 3.0.

But the bug severity should never change, because severities are standard.

Thx. - cindy

From Bill.Zvonar at windriver.com Wed Jul 10 15:32:20 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 10 Jul 2019 15:32:20 +0000
Subject: [Starlingx-discuss] Community Call (July 10, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A86F57@ALA-MBD.corp.ad.wrs.com>

From today's call...

- reviews in need of attention?
  - Cindy: blocking last Ceph upgrade test case, waiting for a 2nd +2 - https://review.opendev.org/#/c/661900/
    - Don noted that there are some outstanding comments from Ovidiu that need a response; Cindy noted that they're working on those
  - Yong noted another one - Don has just +2'd it - https://review.opendev.org/#/c/657535
  - Cindy noted some other patches received from Hao Wang

- defer non-gating low priority work - use the power of -2
  - Don noted that the -2 should be accompanied by a comment stating why it's being deferred; the -2 shouldn't become a default action

- defect trend / gating launchpads
  - updated forecast as of end of last week: https://docs.google.com/spreadsheets/d/1DZZgqrCIL6wxv51_yFBk6Lfmtf1AqPD6z7e5hEs3prU/edit#gid=300550657
  - dealing with the backlog on a per-domain basis
  - rationale for marking something as *not* gating 2.0
    - Ghada suggested one valid option is to move it to 3.0 (leaving the other fields intact)
    - Cindy noted that she needs to manage expectations on how many issues her team can resolve, given the # of resources
    - Cindy noted that Priority != Severity
      - ACTION: Cindy to send her perspective on "severity" to the mailing list, let the discussion ensue
    - Yong agreed with Ghada's point re: assessing the impact from different perspectives - user, program, technical, etc.
      - ACTION: Yong to propose how we could formalize this as a process (kinda thing)

- documentation update (Michael Tullis)
  - highlighted recent improvements/merges this week, which will clarify things for the community about:
    - single source of truth --> where to find the latest and how to contribute - https://docs.starlingx.io
    - status of wiki
    - links to open and active Gerrit reviews
  - Bart noted that some wikis have not been updated to point to https://docs.starlingx.io
    - Mike said they'd do an audit of all the wikis
    - ACTION: Doc team to do an audit of the wikis to find pages that have stale data and/or aren't properly pointing to the docs site
  - currently, the doc team is working on a plan

- Python 2 --> Python 3, per thread http://lists.starlingx.io/pipermail/starlingx-discuss/2019-July/005210.html
  - Dean highlighted that the current plan is to *not* transition to Python 3 in stx 3.0 (see the thread for details)
  - this will be reviewed as part of the overall 3.0 discussion at the TSC

- we didn't get to these today...
  - first contact update - mailing list responsiveness - see https://etherpad.openstack.org/p/stx-first-contact (at the bottom)
  - bitergia update: see https://etherpad.openstack.org/p/stx-bitergia
  - open actions from previous meetings...
    - updates pending:
      - ACTION: release team make the recommendation re: Blueprints for Backlog in the next TSC meeting - pending
      - ACTION: Numan & Ada to sort out how aggregate regression reporting will be done (manual & automated) - no update; looking for a time to meet with Numan and cover this
      - ACTION: Bill start checking if any 'new' people emails are going unresponded - see update here (at bottom): https://etherpad.openstack.org/p/stx-first-contact
      - ACTION: Scott & Dean to talk about the mechanics for big files - Scott's update?
      - ACTION: Dean find out what our options are for increasing the per-mail size limit
      - ACTION: Bill follow up on status of bitergia changes - see Thierry's updates here: https://etherpad.openstack.org/p/stx-bitergia
    - for later:
      - ACTION: Frank update on the forecast for the Docker image list - see https://bugs.launchpad.net/starlingx/+bug/1834504, with build team now
      - ACTION: Frank to talk to CENGN about getting sufficient space (pending any other parameters from Scott)
      - ACTION: Numan/Yang arrange an automation framework info session for the Community (in a few weeks, after Yang's vacation)
      - ACTION: Bill check with Ian about the logistics/timing of a mid-cycle meeting

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, July 9, 2019 12:15 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Community Call (July 10, 2019)

Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include...

- defect trend / gating launchpads
- defer non-gating low priority work - use the power of -2
- documentation update (Michael Tullis)
- first contact update - mailing list responsiveness
- Python 2 --> Python 3

Please feel free to add topics on the etherpad [0].

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190710T1400

From scott.little at windriver.com Wed Jul 10 16:35:51 2019
From: scott.little at windriver.com (Scott Little)
Date: Wed, 10 Jul 2019 12:35:51 -0400
Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 17 - Failure!
In-Reply-To: <1732535876.2.1562726229999.JavaMail.javamailuser@localhost>
References: <1732535876.2.1562726229999.JavaMail.javamailuser@localhost>
Message-ID: <6ddd7a75-e0b7-71f8-5119-d4f2a07f5d70@windriver.com>

Please disregard this. One of the mounts failed to come back after CENGN's power failure.

Scott

On 2019-07-09 10:37 p.m., build.starlingx at gmail.com wrote:
> Project: STX_build_lst_audit
> Build #: 17
> Status: Failure
> Timestamp: 20190710T015259Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190710T013000Z
> DOCKER_BUILD_ID: jenkins-master-20190710T013000Z-builder
> OS: centos
> MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
> PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190710T013000Z/logs
> MASTER_JOB_NAME: STX_build_master_master
> MY_REPO_ROOT: /localdisk/designer/jenkins/master
> PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos

From scott.little at windriver.com Wed Jul 10 16:36:23 2019
From: scott.little at windriver.com (Scott Little)
Date: Wed, 10 Jul 2019 12:36:23 -0400
Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 334 - Failure!
In-Reply-To: <339844466.5.1562729812473.JavaMail.javamailuser@localhost>
References: <339844466.5.1562729812473.JavaMail.javamailuser@localhost>
Message-ID:

One of the mounts failed to come back after the power failure. It has been fixed.

I've launched a new build.

Scott

On 2019-07-09 11:36 p.m., build.starlingx at gmail.com wrote:
> Project: STX_build_pre_installer
> Build #: 334
> Status: Failure
> Timestamp: 20190710T015254Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190710T013000Z
> DOCKER_BUILD_ID: jenkins-master-20190710T013000Z-builder
> OS: centos
> MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190710T013000Z/logs
> PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190710T013000Z/logs
> MASTER_JOB_NAME: STX_build_master_master
> MY_REPO_ROOT: /localdisk/designer/jenkins/master
From Bill.Zvonar at windriver.com Wed Jul 10 17:38:35 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 10 Jul 2019 17:38:35 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com>

Hi Cindy,

Thanks for sending this, I think this gives us something to start the discussion.

However we decide to align on severity/priority (I'll comment on that more later, need to think about it more), I think we need to be careful before we move all mediums to 3.0 - it may be too much of a Gordian knot solution.

I think we need to assess the mediums (as Yong suggested earlier) to say why they should or should not be in 2.0. I also think this may help us sort out what our gating criteria are.

Bill...

-----Original Message-----
From: Xie, Cindy
Sent: Wednesday, July 10, 2019 10:42 AM
To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill ; Khalil, Ghada
Subject: bug severity and priority

Bill/Ghada,
I am sending out my definition of bug severity and priority:

Bug Exposure or Severity | Definition
1 - Critical | Product or key feature is not usable for its intended purpose.
2 - High | Product or key feature is not reliably usable for its intended purpose, or use is significantly impaired.
3 - Medium | Product or key feature is usable provided there is a workaround.
4 - Low | Tolerable impact to user experience, with minimal service and support costs.

Bug Priority | Definition
P1 - Stopper | Resolution of this defect takes precedence over other defects and most other development activities. This level is used to focus maximum development team resources to resolve a defect in the shortest possible timeframe.
P2 - High | Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. The urgency to fix a P2 priority defect is imminent. P2 priority defects are intended to be resolved by the next planned external release of the software.
P3 - Medium | Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. P3 priority defects must have a planned timeframe for a verified resolution.
P4 - Low | Resolution of the defect has the least urgency; P4 priority defects may or may not have plans to resolve.

Let's discuss this and agree on how we'd like to use them. My suggestion for the current "Medium" bugs is that we can mark them as "stx.3.0", and then at the beginning of stx.3 their priority can be moved to "high", reflecting the intent to get them fixed in 3.0.

But the bug severity should never change, because severities are standard.

Thx. - cindy

From sgw at linux.intel.com Wed Jul 10 20:22:32 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Wed, 10 Jul 2019 13:22:32 -0700
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com>
Message-ID: <70a1fab7-c9b8-f20f-4293-70b39e9926fa@linux.intel.com>

On 7/10/19 10:38 AM, Zvonar, Bill wrote:
> Hi Cindy,
>
> Thanks for sending this, I think this gives us something to start the discussion.
>
+1 - we went through a very similar community process with the Yocto Project early on; everyone has their own ideas of Priority/Severity and who sets what and when. We will work through this.

> However we decide to align on severity/priority (I'll comment on that more later, need to think about it more), I think we need to be careful before we move all mediums to 3.0 - it may be too much of a Gordian knot solution.
>
> I think we need to assess the mediums (as Yong suggested earlier) to say why they should or should not be in 2.0. I also think this may help us sort out what our gating criteria are.
>
I agree, we need to take a measured approach to the existing mediums and determine if they are truly 2.0 gating, in which case we elevate them; the rest could/should be marked for 3.0. As I mentioned on the phone, having a burn-down chart for the current Criticals (1) and Highs (16) might help, as we should be approaching 0 on these existing ones entering RC1.

Sau!

> Bill...
>
> -----Original Message-----
> From: Xie, Cindy
> Sent: Wednesday, July 10, 2019 10:42 AM
> To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill ; Khalil, Ghada
> Subject: bug severity and priority
>
> Bill/Ghada,
> I am sending out my definition of bug severity and priority:
>
> Bug Exposure or Severity | Definition
> 1 - Critical | Product or key feature is not usable for its intended purpose.
> 2 - High | Product or key feature is not reliably usable for its intended purpose, or use is significantly impaired.
> 3 - Medium | Product or key feature is usable provided there is a workaround.
> 4 - Low | Tolerable impact to user experience, with minimal service and support costs.
>
> Bug Priority | Definition
> P1 - Stopper | Resolution of this defect takes precedence over other defects and most other development activities. This level is used to focus maximum development team resources to resolve a defect in the shortest possible timeframe.
> P2 - High | Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. The urgency to fix a P2 priority defect is imminent. P2 priority defects are intended to be resolved by the next planned external release of the software.
> P3 - Medium | Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. P3 priority defects must have a planned timeframe for a verified resolution.
> P4 - Low | Resolution of the defect has the least urgency; P4 priority defects may or may not have plans to resolve.
>
> Let's discuss this and agree on how we'd like to use them. My suggestion for the current "Medium" bugs is that we can mark them as "stx.3.0", and then at the beginning of stx.3 their priority can be moved to "high", reflecting the intent to get them fixed in 3.0.
>
> But the bug severity should never change, because severities are standard.
>
> Thx.
> - cindy

From erich.cordoba.malibran at intel.com Wed Jul 10 21:36:49 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Wed, 10 Jul 2019 21:36:49 +0000
Subject: [Starlingx-discuss] [containers] How can I do a pull of Docker images with keystone
In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2B35BE72@FMSMSX125.amr.corp.intel.com>
References: <0A5D9A624DF90343892F8F3FE7DE525A2B35BE72@FMSMSX125.amr.corp.intel.com>
Message-ID:

I'm having the same issue trying to use the registry in controller-0.
In a standard configuration, from a worker I run this:

$ docker pull registry.local:9001/calico/node:v3.6.2
Error response from daemon: Get https://registry.local:9001/v2/calico/node/manifests/v3.6.2: unauthorized: authentication required

Also, using curl and specifying the certificate:

$ sudo curl --cacert /etc/docker/certs.d/registry.local:9001/registry-cert.crt -v -X GET https://registry.local:9001/v2/_catalog
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"registry","Class":"","Name":"catalog","Action":"*"}]}]}

It seems that the authentication should be performed with the registry-token-server. So, is it possible to use the registry in controller-0 from the command line, or is it only intended to be used by the configuration tools?

Thanks

-Erich

On Mon, 2019-07-08 at 21:20 +0000, Perez Carranza, Jose wrote:
> Hi Jerry
>
> I was trying to do a pull of an image and according to this patch
> [1] I'm required a Keystone token to get access to the registry. Are
> you able to explain what are the steps that I need to do to pull
> an image on the local registry using correct keystone
> authentication.
>
> ==========
> sudo docker -D pull registry.local:9001/docker.io/kolla/ubuntu-source-nova-novncproxy
> Using default tag: latest
> Error response from daemon: Get https://registry.local:9001/v2/docker.io/kolla/ubuntu-source-nova-novncproxy/manifests/latest:
> unauthorized: authentication required
> ===========
>
> 1. https://review.opendev.org/#/c/626355/
>
> Regards,
> José

From michael.l.tullis at intel.com Wed Jul 10 21:41:24 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Wed, 10 Jul 2019 21:41:24 +0000
Subject: [Starlingx-discuss] [docs] update on broken links from https://docs.starlingx.io/
Message-ID: <3808363B39586544A6839C76CF81445EA1B914F5@ORSMSX104.amr.corp.intel.com>

Early this morning, we had a PR merge to improve and consolidate left-pane navigation on https://docs.starlingx.io/, but it had unintended consequences for links from some of the landing pages. That PR was https://review.opendev.org/669933.

Strangely, our validation on the Zuul build showed (and still shows) no broken links for that PR, but the actual live merged page did have issues.

This is now fixed with the recently merged https://review.opendev.org/#/c/670153/.

Thx.
-- Mike and team
From michael.l.tullis at intel.com Wed Jul 10 22:35:55 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Wed, 10 Jul 2019 22:35:55 +0000
Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 7/10/2019
Message-ID: <3808363B39586544A6839C76CF81445EA1B925E2@ORSMSX104.amr.corp.intel.com>

For notes and new action items from our docs team meeting today, see our etherpad:

https://etherpad.openstack.org/p/stx-documentation

Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings.

-- Mike

From cindy.xie at intel.com Wed Jul 10 23:13:05 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 10 Jul 2019 23:13:05 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com>

Bill,
I definitely agree that not all Mediums shall be pushed to stx.3.0; this needs to be assessed carefully. But if we combine severity and priority together, then this decision needs to take the resource factor into consideration as well.

Actually, I think it's confusing to call individual LPs "gating" - I understand that we want to get the product quality into good shape and want to get as many bugs fixed as possible before we ship. I suggest using defect counts as part of the release criteria (QRC). An example could be:

Number of Critical P1 defects: zero
Number of High P2 defects: < x
Number of Medium P3 defects: < y

And the only thing we need to agree on is the "x" and "y". It makes it easier for the TSC or release team to make the decision. The QRC needs to be agreed on early, instead of right before the release decision is made.

This way, we can really direct our engineering resources to the most important items, and we all have an agreed common goal.

Thanks. - cindy

-----Original Message-----
From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
Sent: Thursday, July 11, 2019 1:39 AM
To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
Subject: RE: bug severity and priority

Hi Cindy,

Thanks for sending this, I think this gives us something to start the discussion.

However we decide to align on severity/priority (I'll comment on that more later, need to think about it more), I think we need to be careful before we move all mediums to 3.0 - it may be too much of a Gordian knot solution.

I think we need to assess the mediums (as Yong suggested earlier) to say why they should or should not be in 2.0. I also think this may help us sort out what our gating criteria are.

Bill...

-----Original Message-----
From: Xie, Cindy
Sent: Wednesday, July 10, 2019 10:42 AM
To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill ; Khalil, Ghada
Subject: bug severity and priority

Bill/Ghada,
I am sending out my definition of bug severity and priority:

Bug Exposure or Severity | Definition
1 - Critical | Product or key feature is not usable for its intended purpose.
2 - High | Product or key feature is not reliably usable for its intended purpose, or use is significantly impaired.
3 - Medium | Product or key feature is usable provided there is a workaround.
4 - Low | Tolerable impact to user experience, with minimal service and support costs.

Bug Priority | Definition
P1 - Stopper | Resolution of this defect takes precedence over other defects and most other development activities. This level is used to focus maximum development team resources to resolve a defect in the shortest possible timeframe.
P2 - High | Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. The urgency to fix a P2 priority defect is imminent. P2 priority defects are intended to be resolved by the next planned external release of the software.
P3 - Medium | Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. P3 priority defects must have a planned timeframe for a verified resolution.
P4 - Low | Resolution of the defect has the least urgency; P4 priority defects may or may not have plans to resolve.

Let's discuss this and agree on how we'd like to use them. My suggestion for the current "Medium" bugs is that we can mark them as "stx.3.0", and then at the beginning of stx.3 their priority can be moved to "high", reflecting the intent to get them fixed in 3.0.

But the bug severity should never change, because severities are standard.

Thx. - cindy
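The QRC counts proposed above lend themselves to a simple automated report. A rough sketch with launchpadlib (illustrative only - the consumer name, the stx.2.0 tag, and the set of "open" statuses are assumptions, not something defined in this thread):

    #!/usr/bin/env python3
    # Sketch: count open StarlingX bugs by importance to feed release
    # criteria such as "Critical == 0, High < x, Medium < y".
    from launchpadlib.launchpad import Launchpad

    OPEN_STATUSES = ["New", "Confirmed", "Triaged", "In Progress"]

    def qrc_counts(tag="stx.2.0"):
        # Anonymous read-only login is enough for counting public bugs
        lp = Launchpad.login_anonymously("qrc-report", "production", version="devel")
        project = lp.projects["starlingx"]
        counts = {}
        for importance in ("Critical", "High", "Medium", "Low"):
            tasks = project.searchTasks(status=OPEN_STATUSES,
                                        importance=importance,
                                        tags=[tag])
            counts[importance] = len(list(tasks))  # collection is lazy, force it
        return counts

    if __name__ == "__main__":
        for importance, n in qrc_counts().items():
            print("%s: %d" % (importance, n))

The output maps directly onto the criteria: the release gate is then just a check that Critical == 0, High < x and Medium < y.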
From maria.g.perez.ibarra at intel.com Thu Jul 11 03:12:40 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Thu, 11 Jul 2019 03:12:40 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190710
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-10 (link)
Status: YELLOW

===========================================
Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS] | 1 TC FAIL
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS] | 1 TC FAIL
TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers - Virtual Environment

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot: https://bugs.launchpad.net/starlingx/+bug/1832047

Regards
Maria G.
From ezpeerchen at gmail.com Thu Jul 11 09:12:26 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Thu, 11 Jul 2019 17:12:26 +0800
Subject: [Starlingx-discuss] How to configure a non-accelerated data interface using the system CLI on STX R1.0?
Message-ID:

Dear all,

Environment: STX R1.0 (2018/10), all-in-one simplex

How do I configure a non-accelerated data interface using the system CLI?

I used the system command below to turn off ovs-dpdk, but it doesn't seem to work, and it causes the system configuration to fail:

#system modify --vswitch_type=ovs

Best Regards

From jose.perez.carranza at intel.com Thu Jul 11 12:15:47 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Thu, 11 Jul 2019 12:15:47 +0000
Subject: [Starlingx-discuss] [containers] How can I do a pull of Docker images with keystone
In-Reply-To:
References: <0A5D9A624DF90343892F8F3FE7DE525A2B35BE72@FMSMSX125.amr.corp.intel.com>
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2B35C624@FMSMSX125.amr.corp.intel.com>

Hi Erich,

I already received the info from Jerry; it seems we missed continuing the conversation on this thread. Below you can find the answer:

===========
Hi José,

In order to pull a Docker image with keystone authentication, please first run docker login:

sudo docker login registry.local:9001

It will ask for your credentials at this point. Use your keystone credentials. After that, you can just use docker pull.

We currently allow the "admin" user to interact with all repos, while regular users can only interact with their own. In your example, only the "admin" and "docker.io" users can pull registry.local:9001/docker.io/kolla/ubuntu-source-nova-novncproxy

Thanks,
Jerry
============

Regards,
José

> -----Original Message-----
> From: Cordoba Malibran, Erich
> Sent: Wednesday, July 10, 2019 4:37 PM
> To: Perez Carranza, Jose ; starlingx-discuss at lists.starlingx.io; jerry.sun at windriver.com
> Subject: Re: [Starlingx-discuss] [containers] How can I do a pull of Docker images with keystone
>
> I'm having the same issue trying to use the registry in controller-0.
> In a standard configuration, from a worker I run this:
>
> $ docker pull registry.local:9001/calico/node:v3.6.2
> Error response from daemon: Get https://registry.local:9001/v2/calico/node/manifests/v3.6.2: unauthorized: authentication required
>
> Also, using curl and specifying the certificate:
>
> $ sudo curl --cacert /etc/docker/certs.d/registry.local:9001/registry-cert.crt -v -X GET https://registry.local:9001/v2/_catalog
> {"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"registry","Class":"","Name":"catalog","Action":"*"}]}]}
>
> It seems that the authentication should be performed with the registry-token-server. So, is it possible to use the registry in controller-0 from the command line, or is it only intended to be used by the configuration tools?
>
> Thanks
>
> -Erich
>
> On Mon, 2019-07-08 at 21:20 +0000, Perez Carranza, Jose wrote:
> > Hi Jerry
> >
> > I was trying to do a pull of an image and according to this patch
> > [1] I'm required a Keystone token to get access to the registry. Are
> > you able to explain what are the steps that I need to do to pull
> > an image on the local registry using correct keystone
> > authentication.
> >
> > ==========
> > sudo docker -D pull registry.local:9001/docker.io/kolla/ubuntu-source-nova-novncproxy
> > Using default tag: latest
> > Error response from daemon: Get https://registry.local:9001/v2/docker.io/kolla/ubuntu-source-nova-novncproxy/manifests/latest:
> > unauthorized: authentication required
> > ===========
> >
> > 1. https://review.opendev.org/#/c/626355/
> >
> > Regards,
> > José
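Pulling Jerry's steps together, the end-to-end flow from a node that already trusts the registry certificate is just the following (the image path is the one from José's example; the logout step is optional cleanup and not part of the original answer):

    # Log in once with keystone credentials; the Docker client caches
    # the credentials for subsequent operations against this registry.
    sudo docker login registry.local:9001

    # Pulls against repos the user is permitted to access now succeed.
    sudo docker pull registry.local:9001/docker.io/kolla/ubuntu-source-nova-novncproxy

    # Drop the cached credentials when done.
    sudo docker logout registry.local:9001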
From sgw at linux.intel.com Thu Jul 11 19:39:52 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Thu, 11 Jul 2019 12:39:52 -0700
Subject: [Starlingx-discuss] Python2 -> Python3
In-Reply-To:
References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com>
Message-ID: <5c77d8a6-de6f-0b3a-10bd-9edb0ece08eb@linux.intel.com>

On 7/10/19 7:03 AM, Sun, Austin wrote:
> Hi All:
> The logic here is: if a package comes from CentOS directly, we will wait for the upgrade to CentOS 8.x for Python 3 compliance.
> Please filter 'Do not contain centos' in column N; it will then show the 11 packages below.
> As synced in the non-OpenStack distro meeting, we can further filter out the 4 packages that come from the Fedora project, and python-cephclient as a flock service.
> So the 6 packages below come from 3rd parties and might not be Python 2-to-3 compliant:
>
> Package           | Who is using
> openvswitch       | ovs
> python-cephfs     | ceph
> python-smartpm    | standalone package
> qemu-kvm-ev       | mtce-compute
> requests-toolbelt | cgcs-patch-controller
> rpm-python        | cgcs-patch-controller

Can you identify replacement python3 packages for any of these? I know we found out that smartpm is used for the patch process, and that smartpm is an older project with no further upstream support, so that will require a fair amount of work.

Sau!

> Thanks.
> BR
> Austin Sun.
> > -----Original Message----- > From: Xie, Cindy [mailto:cindy.xie at intel.com] > Sent: Thursday, July 4, 2019 11:37 AM > To: Hu, Yong ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > Austin, > Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: > > In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. > > Thanks. - cindy > > -----Original Message----- > From: Yong Hu [mailto:yong.hu at intel.com] > Sent: Thursday, July 4, 2019 11:25 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? > > In my view the best solution is to wait for CentOS 8.0 :-) > > > On 03/07/2019 2:55 PM, Dean Troyer wrote: >> On 7/3/19 4:07 PM, Saul Wold wrote: >>> The current proposal seems to be to completely convert the base >>> CentOS7.6 system level python to use python3, this carries a high >>> risk factor as changing out all system-level python code could have a >>> cascade effect on system functionality and additional dependencies. >>> While >> >> Changing the distro/system Python version out from under the rest of >> the distro seems like an enormous time sink, much less a significant >> reliability risk. >> >>> A better solution would be to build python3 and the associated >>> requirements from the existing RHEL EPEL (Extra Packages for >>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>> This version correctly installs in a segregated directory tree. >> >> We would probably want to run a significant subset of the upstream >> OpenStack testing on this combination as it is not (AFAIK) tested there. >>  But this is true of any runtime + distro combination that is not in >> the fairly short list of combinations that upstream OpenStack actively >> tests. >> >>> Another option would be to delay the actual python2 conversion to >>> StarlingX 4.0, the OpenStack Train release will still support python2. >> >> One downside to this is it leaves us no margin to defer the change >> again, this is our second chance as it were.  OpenStack U (as of now) >> is likely to drop py2 support as a guarantee across-the-board. >> >>> There is still work that is needed beyond the conversion of the >>> python code itself to things like RPM specfiles data and other source >>> code (such as, C code that has #includes of python2.7). It's not >>> clear to me how much functional testing with python3 has occurred for >>> the flock beyond what Dean has started with devstack. >> >> I managed to get the fault services running on py3, sysinv fell over >> during the dbsync in my quick post-PTG trial run.  That is as far as I >> took it.  
Anyone who wants to try can pick out the local.conf I posted >> [0] >> >> dt >> >> [0] http://paste.openstack.org/show/753844/ >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Don.Penney at windriver.com Thu Jul 11 19:50:11 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 11 Jul 2019 19:50:11 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 In-Reply-To: <5c77d8a6-de6f-0b3a-10bd-9edb0ece08eb@linux.intel.com> References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> <5c77d8a6-de6f-0b3a-10bd-9edb0ece08eb@linux.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC1502973@ALA-MBD.corp.ad.wrs.com> It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? I can also look at the current use of the rpm module in patching and look for alternatives. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, July 11, 2019 3:40 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/10/19 7:03 AM, Sun, Austin wrote: > Hi All: > The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 > Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. > As sync in non-OpenStack distro meeting. > We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . > So below 6 packages are coming 3rd party which might be not python2to3 compliance. > > Package | who is using > openvswitch | ovs > python-cephfs | ceph > python-smartpm | standalone package > qemu-kvm-ev | mtce-compute > requests-toolbelt | cgcs-patch-controller > rpm-python | cgcs-patch-controller Can you identify replacement python3 packages for any of these. I know we found out that smartpm is used for the patch process, I know that smartpm is also an older project that does not have any upstream support any further, so that will require a fair amount of work. Sau! > > Thanks. > BR > Austin Sun. 
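As a rough sketch of the backend split described above, the patching code could program against a small interface and select the package manager in one place. The class and function names here (PkgBackend, YumBackend, SmartBackend, get_backend) are invented for illustration and are not the actual cgcs-patch structure:

import abc
import subprocess


class PkgBackend(abc.ABC):
    """Minimal interface the patching agent would code against."""

    @abc.abstractmethod
    def install(self, packages):
        """Install or upgrade the named packages."""

    @abc.abstractmethod
    def remove(self, packages):
        """Remove the named packages."""


class YumBackend(PkgBackend):
    def install(self, packages):
        subprocess.check_call(["yum", "install", "-y"] + list(packages))

    def remove(self, packages):
        subprocess.check_call(["yum", "remove", "-y"] + list(packages))


class SmartBackend(PkgBackend):
    def install(self, packages):
        subprocess.check_call(["smart", "install", "-y"] + list(packages))

    def remove(self, packages):
        subprocess.check_call(["smart", "remove", "-y"] + list(packages))


def get_backend(name):
    # Selecting the backend in one place keeps the rest of the
    # patching code independent of the underlying package manager.
    return {"yum": YumBackend, "smart": SmartBackend}[name]()

With a split like this, moving from "smart" back to "yum" becomes a one-line change at the call site, e.g. backend = get_backend("yum").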
> > -----Original Message----- > From: Sun, Austin [mailto:austin.sun at intel.com] > Sent: Wednesday, July 10, 2019 4:03 PM > To: Xie, Cindy ; Hu, Yong ; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > Hi All: > New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. > There are 11 packages not from centos but including python and may not be compatiable python2 and python3. > Package | who is using > openvswitch | ovs > python-aniso8601 | keystone > python-cephclient | ceph > python-cephfs | ceph > python-django-bash-completion | sysinv > python-smartpm | standalone package > python-unittest2 | sysinv > python-XStatic-jquery-ui | stx-gui > qemu-kvm-ev | mtce-compute > requests-toolbelt | cgcs-patch-controller > rpm-python | cgcs-patch-controller > > > I will continue check those 11 packages . > > [1] https://bugs.launchpad.net/starlingx/+bug/1808073 > > Thanks. > BR > Austin Sun. > > -----Original Message----- > From: Sun, Austin > Sent: Thursday, July 4, 2019 11:43 AM > To: Xie, Cindy ; Hu, Yong ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > Hi Cindy: > Yes. we will do it and update sheet. > > Thanks. > BR > Austin Sun. > > -----Original Message----- > From: Xie, Cindy [mailto:cindy.xie at intel.com] > Sent: Thursday, July 4, 2019 11:37 AM > To: Hu, Yong ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > Austin, > Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: > > In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. > > Thanks. - cindy > > -----Original Message----- > From: Yong Hu [mailto:yong.hu at intel.com] > Sent: Thursday, July 4, 2019 11:25 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? > > In my view the best solution is to wait for CentOS 8.0 :-) > > > On 03/07/2019 2:55 PM, Dean Troyer wrote: >> On 7/3/19 4:07 PM, Saul Wold wrote: >>> The current proposal seems to be to completely convert the base >>> CentOS7.6 system level python to use python3, this carries a high >>> risk factor as changing out all system-level python code could have a >>> cascade effect on system functionality and additional dependencies. >>> While >> >> Changing the distro/system Python version out from under the rest of >> the distro seems like an enormous time sink, much less a significant >> reliability risk. >> >>> A better solution would be to build python3 and the associated >>> requirements from the existing RHEL EPEL (Extra Packages for >>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>> This version correctly installs in a segregated directory tree. >> >> We would probably want to run a significant subset of the upstream >> OpenStack testing on this combination as it is not (AFAIK) tested there. >>  But this is true of any runtime + distro combination that is not in >> the fairly short list of combinations that upstream OpenStack actively >> tests. 
>> >>> Another option would be to delay the actual python2 conversion to >>> StarlingX 4.0, the OpenStack Train release will still support python2. >> >> One downside to this is it leaves us no margin to defer the change >> again, this is our second chance as it were.  OpenStack U (as of now) >> is likely to drop py2 support as a guarantee across-the-board. >> >>> There is still work that is needed beyond the conversion of the >>> python code itself to things like RPM specfiles data and other source >>> code (such as, C code that has #includes of python2.7). It's not >>> clear to me how much functional testing with python3 has occurred for >>> the flock beyond what Dean has started with devstack. >> >> I managed to get the fault services running on py3, sysinv fell over >> during the dbsync in my quick post-PTG trial run.  That is as far as I >> took it.  Anyone who wants to try can pick out the local.conf I posted >> [0] >> >> dt >> >> [0] http://paste.openstack.org/show/753844/ >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Thu Jul 11 21:11:36 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 11 Jul 2019 21:11:36 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> <5c77d8a6-de6f-0b3a-10bd-9edb0ece08eb@linux.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC15029D8@ALA-MBD.corp.ad.wrs.com> I think I can use this module in place of the rpm one: https://pypi.org/project/version_utils/ It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python. -----Original Message----- From: Penney, Don Sent: Thursday, July 11, 2019 3:50 PM To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python2 -> Python3 It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? I can also look at the current use of the rpm module in patching and look for alternatives. 
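For reference, rpm.labelCompare compares (epoch, version, release) tuples segment by segment. A simplified pure-Python sketch of that comparison logic is below; it is illustrative only, is not the version_utils implementation, and skips rpm's tilde/caret special cases:

import re


def _rpmvercmp(a, b):
    # Split into alternating numeric and alphabetic segments, as rpm does.
    segs_a = re.findall(r"[0-9]+|[a-zA-Z]+", a)
    segs_b = re.findall(r"[0-9]+|[a-zA-Z]+", b)
    for x, y in zip(segs_a, segs_b):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)            # numeric segments compare as integers
        elif x.isdigit() != y.isdigit():
            return 1 if x.isdigit() else -1  # numeric segments sort above alphabetic
        if x != y:
            return 1 if x > y else -1
    # All common segments equal: the label with more segments is newer.
    return (len(segs_a) > len(segs_b)) - (len(segs_a) < len(segs_b))


def label_compare(evr_a, evr_b):
    """Compare two (epoch, version, release) tuples; returns -1, 0 or 1."""
    for a, b in zip(evr_a, evr_b):
        result = _rpmvercmp(str(a or "0"), str(b or "0"))
        if result != 0:
            return result
    return 0

For example, label_compare(("0", "1.0.1", "2"), ("0", "1.0", "10")) returns 1, because version 1.0.1 is newer than 1.0 regardless of the release field.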
-----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, July 11, 2019 3:40 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/10/19 7:03 AM, Sun, Austin wrote: > Hi All: > The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 > Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. > As sync in non-OpenStack distro meeting. > We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . > So below 6 packages are coming 3rd party which might be not python2to3 compliance. > > Package | who is using > openvswitch | ovs > python-cephfs | ceph > python-smartpm | standalone package > qemu-kvm-ev | mtce-compute > requests-toolbelt | cgcs-patch-controller > rpm-python | cgcs-patch-controller Can you identify replacement python3 packages for any of these. I know we found out that smartpm is used for the patch process, I know that smartpm is also an older project that does not have any upstream support any further, so that will require a fair amount of work. Sau! > > Thanks. > BR > Austin Sun. > > -----Original Message----- > From: Sun, Austin [mailto:austin.sun at intel.com] > Sent: Wednesday, July 10, 2019 4:03 PM > To: Xie, Cindy ; Hu, Yong ; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > Hi All: > New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. > There are 11 packages not from centos but including python and may not be compatiable python2 and python3. > Package | who is using > openvswitch | ovs > python-aniso8601 | keystone > python-cephclient | ceph > python-cephfs | ceph > python-django-bash-completion | sysinv > python-smartpm | standalone package > python-unittest2 | sysinv > python-XStatic-jquery-ui | stx-gui > qemu-kvm-ev | mtce-compute > requests-toolbelt | cgcs-patch-controller > rpm-python | cgcs-patch-controller > > > I will continue check those 11 packages . > > [1] https://bugs.launchpad.net/starlingx/+bug/1808073 > > Thanks. > BR > Austin Sun. > > -----Original Message----- > From: Sun, Austin > Sent: Thursday, July 4, 2019 11:43 AM > To: Xie, Cindy ; Hu, Yong ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > Hi Cindy: > Yes. we will do it and update sheet. > > Thanks. > BR > Austin Sun. > > -----Original Message----- > From: Xie, Cindy [mailto:cindy.xie at intel.com] > Sent: Thursday, July 4, 2019 11:37 AM > To: Hu, Yong ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > Austin, > Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: > > In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. > > Thanks. - cindy > > -----Original Message----- > From: Yong Hu [mailto:yong.hu at intel.com] > Sent: Thursday, July 4, 2019 11:25 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? 
> > In my view the best solution is to wait for CentOS 8.0 :-) > > > On 03/07/2019 2:55 PM, Dean Troyer wrote: >> On 7/3/19 4:07 PM, Saul Wold wrote: >>> The current proposal seems to be to completely convert the base >>> CentOS7.6 system level python to use python3, this carries a high >>> risk factor as changing out all system-level python code could have a >>> cascade effect on system functionality and additional dependencies. >>> While >> >> Changing the distro/system Python version out from under the rest of >> the distro seems like an enormous time sink, much less a significant >> reliability risk. >> >>> A better solution would be to build python3 and the associated >>> requirements from the existing RHEL EPEL (Extra Packages for >>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>> This version correctly installs in a segregated directory tree. >> >> We would probably want to run a significant subset of the upstream >> OpenStack testing on this combination as it is not (AFAIK) tested there. >>  But this is true of any runtime + distro combination that is not in >> the fairly short list of combinations that upstream OpenStack actively >> tests. >> >>> Another option would be to delay the actual python2 conversion to >>> StarlingX 4.0, the OpenStack Train release will still support python2. >> >> One downside to this is it leaves us no margin to defer the change >> again, this is our second chance as it were.  OpenStack U (as of now) >> is likely to drop py2 support as a guarantee across-the-board. >> >>> There is still work that is needed beyond the conversion of the >>> python code itself to things like RPM specfiles data and other source >>> code (such as, C code that has #includes of python2.7). It's not >>> clear to me how much functional testing with python3 has occurred for >>> the flock beyond what Dean has started with devstack. >> >> I managed to get the fault services running on py3, sysinv fell over >> during the dbsync in my quick post-PTG trial run.  That is as far as I >> took it.  
Anyone who wants to try can pick out the local.conf I posted >> [0] >> >> dt >> >> [0] http://paste.openstack.org/show/753844/ >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maria.g.perez.ibarra at intel.com Thu Jul 11 22:01:57 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 11 Jul 2019 22:01:57 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/11/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID=" 20190705T013000Z" from (link) ---------------------------------------------------------------------- Overall Results: Total = 422 Pass = 236 Fail = 17 Blocked = 45 Not Run = 124 Total executed = 298 Pass Rate = 93.28% Formula used : Pass Rate = pass * 100 / (pass + fail) ---------------------------------------------------------------------- Results per Domain: Regression - AIO-SX 23 PASS | 1 FAIL | 2 BLOCKED Regression - Backup & Restore Regression - Distributed Cloud Regression - Gnoochi 15 PASS Regression - FM Regression - HA 2 4 PASS | 1 FAIL Regression - Heat 12 PASS | 1 BLOCKED Regression - Horizon 4 PASS Regression - Install and Config 5 PASS Regression - Maintenance 5 PASS | 1 FAIL Regression - Networking 91 PASS | 10 FAIL | 28 BLOCKED Regression - Nova 2 PASS | Regression - Security 34 PASS | 2 FAIL | 6 BLOCKED Regression - Storage Regression - Inventory 29 PASS | 1 FAIL System Test 12 PASS | 1 FAIL | 8 BLOCKED --------------------------------------------------------------------------- Bugs: Controller can't unlock after lock on AIO-SX https://bugs.launchpad.net/starlingx/+bug/1833472 user does not login within configured time(60s) login is aborted https://bugs.launchpad.net/starlingx/+bug/1833469 After pull data cable on the compute, no alarm has triggered https://bugs.launchpad.net/starlingx/+bug/1834512 Cannot create instances with SRIOV port https://bugs.launchpad.net/starlingx/+bug/1835318 Containers: lock_host failed on a host with config_drive VM https://bugs.launchpad.net/starlingx/+bug/1821026 200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot https://bugs.launchpad.net/starlingx/+bug/1832047 virsh only listing one volume, even though there was an additional volume attached after instantiation https://bugs.launchpad.net/starlingx/+bug/1834194 3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host https://bugs.launchpad.net/starlingx/+bug/1834255 Device UUID is missing when boot up VM with block device https://bugs.launchpad.net/starlingx/+bug/1835282 stx-openstack apply takes longer time 
when lock and unlock on standby controller https://bugs.launchpad.net/starlingx/+bug/1834083
Port list was not showing for some computes during install https://bugs.launchpad.net/starlingx/+bug/1834245
ceph-mgr restful plugin error preventing platform-integ-apps from auto applying https://bugs.launchpad.net/starlingx/+bug/1835938
Instance created with a flat network spawns in error state https://bugs.launchpad.net/starlingx/+bug/1835965
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute https://bugs.launchpad.net/starlingx/+bug/1835807
compute reboot loop pci-irq-affinity-agent process was failing https://bugs.launchpad.net/starlingx/+bug/1836240

Total Bugs: 15
-----------------------------------------------------------------------------
For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175

Regards!
Maria G
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Ghada.Khalil at windriver.com Thu Jul 11 22:07:40 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Thu, 11 Jul 2019 22:07:40 +0000
Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 07/11
Message-ID: <151EE31B9FCCA54397A757BC674650F0C1547919@ALA-MBD.corp.ad.wrs.com>

Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking

Team Meeting Agenda/Notes - Jul 10/2019

Bugs - stx.2.0 gating / new - Count: 16
- https://bugs.launchpad.net/starlingx/+bug/1829403 - Austin, this is a huge page allocation issue, removed the stx.networking tag
- https://bugs.launchpad.net/starlingx/+bug/1835318 - Chenjie, the bug has been reproduced and still needs investigation.
- https://bugs.launchpad.net/starlingx/+bug/1835807 - Kevin, need info from the reporter >> Doesn't seem to be a VIM issue; neutron pod didn't recover. Need to re-assign this.
- https://bugs.launchpad.net/starlingx/+bug/1817593 - Teresa, proposal made by Matt; needs to be implemented. Can potentially re-gate given this is for a very specific config; can be worked around by using an alternative config.
- https://bugs.launchpad.net/starlingx/+bug/1818118 - Joseph, starting investigation next week
- https://bugs.launchpad.net/starlingx/+bug/1822366 - Nobody, not being worked. Specific to Niantic NIC >> This may be related to an stx-specific nova patch which is not available in upstream nova
- https://bugs.launchpad.net/starlingx/+bug/1822396 - Joseph, starting investigation next week
- https://bugs.launchpad.net/starlingx/+bug/1832047 - Cheng, was not able to reproduce the issue. Wonder if I can get access to the bug reporter's env? >> Seems to be reproducible by someone in Cindy's team as well as the Intel Mexico team. Cheng will follow up with them first.
- https://bugs.launchpad.net/starlingx/+bug/1832697 - Marvin, local test passed. I will commit to gerrit tomorrow >> code review in progress
- https://bugs.launchpad.net/starlingx/+bug/1832892 - Steve, on vacation, will investigate in Aug
- https://bugs.launchpad.net/starlingx/+bug/1834234 - Nobody, not being worked. Need to re-assign
- https://bugs.launchpad.net/starlingx/+bug/1834556 - Marvin, didn't reproduce the bug in AIO-SX VM environment, need info from the reporter >> Ghada to follow up with Litao.
- https://bugs.launchpad.net/starlingx/+bug/1830082 - Teresa, lower priority. Can potentially re-gate given a configuration step was missed in this scenario. 
- https://bugs.launchpad.net/starlingx/+bug/1835965 - Chenjie, the steps seem wrong and needs bug reporter to retest based on the provided steps. >> Elio will follow up with Paulina to re-test with the new steps - https://bugs.launchpad.net/starlingx/+bug/1835300 - Krishna (reporter), need info from the reporter >> Reporter confirmed that ovs-dpdk is used here. Emulated devices are not supported for ovs-dpdk. Ghada to close. - https://bugs.launchpad.net/starlingx/+bug/1830286 - Elio, no info since June 5. >> Intel test team is still investigating the switch configs for this system Networking Test Status - Containerized OVS - Testing complete -- 92% Pass Rate - One bug: https://bugs.launchpad.net/starlingx/+bug/1835965 which maybe an issue with the steps; waiting for re-test - Networking Regression - Tracker: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175 - Total: 144 / Pass: 90 / Fail: 10 / Not Run: 15 / Blocked: 28 - Blocked Test-cases: - TCs related to networking agent alarms and data connectivity alarms >> these will be obsoleted - TCs missing steps >> ChrisW to provide some input today - SR-IOV - IPv6 - Need infrastructure configuration changes to allow the use of IPv6 OAM addresses and connectivity to the registry - Targeting a basic configuration with about 10 TCs stx.3.0 - OVS-DPDK - Prime: Cheng - Cheng is working on the spec for the OVS-DPDK Containerization. Matt is reviewing. - TSN - Prime: Huifeng - Story: https://storyboard.openstack.org/#!/story/2005516 - spec has been reviewed: https://review.opendev.org/#/c/666768/ should be ready to merge. >> Ghada to ask Ian to merge on the TSC -- The process is to get a majority of TSC members to review, so I've asked Saul. Need to ask Dean to review as well once he's back from vacation next week. - Working on enabling TSN in VM - Using Ubuntu 18.04 - Question from Matt: what test head/application will be used for the validation? what aspects of TSN will be validated? Matt to follow up by email. 
From maria.g.perez.ibarra at intel.com Thu Jul 11 22:10:36 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 11 Jul 2019 22:10:36 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190711 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-11 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Jul 11 22:29:52 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 11 Jul 2019 22:29:52 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - July 11/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0C154798E@ALA-MBD.corp.ad.wrs.com> Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release meeting agenda / notes July 11 2019 stx.2.0 - Feature Exception Status - Code Removal Stories: most code is in. 2 reviews left to merge. - K8s API Auth: The proposal for this is to abandon this. The approach to use keystone as the API backend is not working and doesn't have much support by the upstream community. Need to close on proposal with containers PL/TL. - Multiple cinder storage tiers: Get update from Frank once he's back from vacation - Feature Testing Status - Tracker: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237 - Container - Ada - 75/54/1/5 - 93% progress - Numan - 82/58/3/0 - 74% progress - We will close feature testing with this numbers, and will include tests for ironic and nova override in the regression cycle. - OpenStack patch elimination - Ada - 48/38/0/0 -100% progress, 10 tc deferred - Numan - 8/4/4/0 - 100% progress - CentOS 7.6 - QAT - 12/10/0/0 - 100% progress. 1 tc deferred, 1 obsolete - Containerized OVS - 15/13/1/1 - 100% progress. - Ceph upgrade - 26/24/0/1 - 100% progress. 1 tc deferred. 
- Regression Testing Status
- Tracker: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175
- Running with ISO 20190705T013000Z
- Total / Pass / Fail / Blocked = 422 / 235 / 17 / 45
- Pass rate: 93.2% - Raw calculation -- regardless of whether the reported bugs are deemed gating or not
- Can calculate an Adjusted Pass rate that removes the low priority / not gating bugs at the end of the release
- Re-test of fixed bugs will occur in the second round of testing
- Test-case First Pass: For Intel TCs, expected to complete by July 12. For WR TCs, will get update from Numan
- RC1 -- Patch Checkin process
- r/stx.2.0 branch will be created at RC1
- Any bug that is "gating" stx.2.0 will get merged in master first. Then cherrypicked by the developer to the release branch.
- Any other features / enhancements go in master only.
- For the r/stx.2.0 branch, only gating items would go in. Nothing else.
- Specific CENGN builds will be setup for the r/stx.2.0
- All stx.2.0 test activities will be moved to the release branch
- Sanity will continue on both branches to ensure master remains sane
- Bugs
- Gating criteria -- severity & priority is under discussion in the community
- Starting with a bug scrub for the distro.openstack domain next week. Will see how that goes and will look to extend to other areas

stx.3.0
- Initial feature list w/ Project Leads available on TSC etherpad - https://etherpad.openstack.org/p/stx-cores -- See "R3 Planning" under the 7/11 meeting minutes
- Agreed with TSC that this initial list is sufficient to declare stx.3.0 milestone-1 next week - 7/17
- Action: Ghada to send a note to the community meeting
- Action: Ghada to copy the current feature list to the stx.3.0 planning spreadsheet for the PLs to start working on their plans
- Using Blueprints for backlog management is still an open action. Discuss in next meeting.
- Do we start using this for new backlog items/features? Or do we need to enter features that were already approved for stx.3.0?

From austin.sun at intel.com Fri Jul 12 01:16:24 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Fri, 12 Jul 2019 01:16:24 +0000
Subject: [Starlingx-discuss] Python2 -> Python3
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC15029D8@ALA-MBD.corp.ad.wrs.com>
References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> <5c77d8a6-de6f-0b3a-10bd-9edb0ece08eb@linux.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC15029D8@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

Hi Penney:
Thanks a lot for your info.
Story [1] is being used to track python2to3 for stx.3.0.
Task 35794 was created for upgrading requests-toolbelt.
Task 35795 is for replacing rpm_python and Task 35796 is for replacing python-smartpm.

[1] https://storyboard.openstack.org/#!/story/2006158

Thanks.
BR
Austin Sun.

-----Original Message-----
From: Penney, Don [mailto:Don.Penney at windriver.com]
Sent: Friday, July 12, 2019 5:12 AM
To: Saul Wold ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Python2 -> Python3

I think I can use this module in place of the rpm one:
https://pypi.org/project/version_utils/

It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python. 
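On the requests-toolbelt side (Task 35794 above), the most commonly used piece, MultipartEncoder, has the same API on python2 and python3. A minimal usage sketch follows; the endpoint URL and field name are made up for illustration and are not the actual patching REST API:

import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

# Stream a file upload without buffering the whole file in memory.
with open("example.patch", "rb") as f:
    encoder = MultipartEncoder(
        fields={"file": ("example.patch", f, "application/octet-stream")}
    )
    resp = requests.post(
        "http://localhost:8080/upload",  # placeholder URL, not a real endpoint
        data=encoder,
        headers={"Content-Type": encoder.content_type},
    )
print(resp.status_code)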
-----Original Message----- From: Penney, Don Sent: Thursday, July 11, 2019 3:50 PM To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python2 -> Python3 It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? I can also look at the current use of the rpm module in patching and look for alternatives. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, July 11, 2019 3:40 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/10/19 7:03 AM, Sun, Austin wrote: > Hi All: > The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 > Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. > As sync in non-OpenStack distro meeting. > We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . > So below 6 packages are coming 3rd party which might be not python2to3 compliance. > > Package | who is using > openvswitch | ovs > python-cephfs | ceph > python-smartpm | standalone package > qemu-kvm-ev | mtce-compute > requests-toolbelt | cgcs-patch-controller > rpm-python | cgcs-patch-controller Can you identify replacement python3 packages for any of these. I know we found out that smartpm is used for the patch process, I know that smartpm is also an older project that does not have any upstream support any further, so that will require a fair amount of work. Sau! > > Thanks. > BR > Austin Sun. > > -----Original Message----- > From: Sun, Austin [mailto:austin.sun at intel.com] > Sent: Wednesday, July 10, 2019 4:03 PM > To: Xie, Cindy ; Hu, Yong ; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > Hi All: > New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. > There are 11 packages not from centos but including python and may not be compatiable python2 and python3. > Package | who is using > openvswitch | ovs > python-aniso8601 | keystone > python-cephclient | ceph > python-cephfs | ceph > python-django-bash-completion | sysinv > python-smartpm | standalone package > python-unittest2 | sysinv > python-XStatic-jquery-ui | stx-gui > qemu-kvm-ev | mtce-compute > requests-toolbelt | cgcs-patch-controller > rpm-python | cgcs-patch-controller > > > I will continue check those 11 packages . > > [1] https://bugs.launchpad.net/starlingx/+bug/1808073 > > Thanks. > BR > Austin Sun. > > -----Original Message----- > From: Sun, Austin > Sent: Thursday, July 4, 2019 11:43 AM > To: Xie, Cindy ; Hu, Yong ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > Hi Cindy: > Yes. we will do it and update sheet. > > Thanks. > BR > Austin Sun. 
> > -----Original Message----- > From: Xie, Cindy [mailto:cindy.xie at intel.com] > Sent: Thursday, July 4, 2019 11:37 AM > To: Hu, Yong ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > Austin, > Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: > > In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. > > Thanks. - cindy > > -----Original Message----- > From: Yong Hu [mailto:yong.hu at intel.com] > Sent: Thursday, July 4, 2019 11:25 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? > > In my view the best solution is to wait for CentOS 8.0 :-) > > > On 03/07/2019 2:55 PM, Dean Troyer wrote: >> On 7/3/19 4:07 PM, Saul Wold wrote: >>> The current proposal seems to be to completely convert the base >>> CentOS7.6 system level python to use python3, this carries a high >>> risk factor as changing out all system-level python code could have a >>> cascade effect on system functionality and additional dependencies. >>> While >> >> Changing the distro/system Python version out from under the rest of >> the distro seems like an enormous time sink, much less a significant >> reliability risk. >> >>> A better solution would be to build python3 and the associated >>> requirements from the existing RHEL EPEL (Extra Packages for >>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>> This version correctly installs in a segregated directory tree. >> >> We would probably want to run a significant subset of the upstream >> OpenStack testing on this combination as it is not (AFAIK) tested there. >>  But this is true of any runtime + distro combination that is not in >> the fairly short list of combinations that upstream OpenStack actively >> tests. >> >>> Another option would be to delay the actual python2 conversion to >>> StarlingX 4.0, the OpenStack Train release will still support python2. >> >> One downside to this is it leaves us no margin to defer the change >> again, this is our second chance as it were.  OpenStack U (as of now) >> is likely to drop py2 support as a guarantee across-the-board. >> >>> There is still work that is needed beyond the conversion of the >>> python code itself to things like RPM specfiles data and other source >>> code (such as, C code that has #includes of python2.7). It's not >>> clear to me how much functional testing with python3 has occurred for >>> the flock beyond what Dean has started with devstack. >> >> I managed to get the fault services running on py3, sysinv fell over >> during the dbsync in my quick post-PTG trial run.  That is as far as I >> took it.  
Anyone who wants to try can pick out the local.conf I posted
>> [0]
>>
>> dt
>>
>> [0] http://paste.openstack.org/show/753844/
>>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From sgw at linux.intel.com Fri Jul 12 02:22:05 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Thu, 11 Jul 2019 19:22:05 -0700
Subject: Re: [Starlingx-discuss] Python2 -> Python3
In-Reply-To: 
References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> <5c77d8a6-de6f-0b3a-10bd-9edb0ece08eb@linux.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC15029D8@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

On 7/11/19 6:16 PM, Sun, Austin wrote:
> Hi Penney:
> Thanks a lot for your info.
> Story [1] is being used to track python2to3 for stx.3.0.
> Task 35794 was created for upgrading requests-toolbelt.
> Task 35795 is for replacing rpm_python and Task 35796 is for replacing python-smartpm.

Replacing python-smartpm probably needs a story of its own; it will completely change the patch update process.

Sau!

>
> [1] https://storyboard.openstack.org/#!/story/2006158
>
> Thanks.
> BR
> Austin Sun.
>
> -----Original Message-----
> From: Penney, Don [mailto:Don.Penney at windriver.com]
> Sent: Friday, July 12, 2019 5:12 AM
> To: Saul Wold ; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Python2 -> Python3
>
> I think I can use this module in place of the rpm one:
> https://pypi.org/project/version_utils/
>
> It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python.
>
>
> -----Original Message-----
> From: Penney, Don
> Sent: Thursday, July 11, 2019 3:50 PM
> To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io
> Subject: RE: [Starlingx-discuss] Python2 -> Python3
>
> It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto.
>
> We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? 
> > I can also look at the current use of the rpm module in patching and look for alternatives. > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 11, 2019 3:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > > > On 7/10/19 7:03 AM, Sun, Austin wrote: >> Hi All: >> The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 >> Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. >> As sync in non-OpenStack distro meeting. >> We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . >> So below 6 packages are coming 3rd party which might be not python2to3 compliance. >> >> Package | who is using >> openvswitch | ovs >> python-cephfs | ceph >> python-smartpm | standalone package >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller > > Can you identify replacement python3 packages for any of these. > > I know we found out that smartpm is used for the patch process, I know > that smartpm is also an older project that does not have any upstream > support any further, so that will require a fair amount of work. > > Sau! > >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin [mailto:austin.sun at intel.com] >> Sent: Wednesday, July 10, 2019 4:03 PM >> To: Xie, Cindy ; Hu, Yong ; 'starlingx-discuss at lists.starlingx.io' >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Hi All: >> New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. >> There are 11 packages not from centos but including python and may not be compatiable python2 and python3. >> Package | who is using >> openvswitch | ovs >> python-aniso8601 | keystone >> python-cephclient | ceph >> python-cephfs | ceph >> python-django-bash-completion | sysinv >> python-smartpm | standalone package >> python-unittest2 | sysinv >> python-XStatic-jquery-ui | stx-gui >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller >> >> >> I will continue check those 11 packages . >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1808073 >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin >> Sent: Thursday, July 4, 2019 11:43 AM >> To: Xie, Cindy ; Hu, Yong ; starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] Python2 -> Python3 >> >> Hi Cindy: >> Yes. we will do it and update sheet. >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Xie, Cindy [mailto:cindy.xie at intel.com] >> Sent: Thursday, July 4, 2019 11:37 AM >> To: Hu, Yong ; starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Austin, >> Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: >> >> In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. >> >> Thanks. 
- cindy >> >> -----Original Message----- >> From: Yong Hu [mailto:yong.hu at intel.com] >> Sent: Thursday, July 4, 2019 11:25 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? >> >> In my view the best solution is to wait for CentOS 8.0 :-) >> >> >> On 03/07/2019 2:55 PM, Dean Troyer wrote: >>> On 7/3/19 4:07 PM, Saul Wold wrote: >>>> The current proposal seems to be to completely convert the base >>>> CentOS7.6 system level python to use python3, this carries a high >>>> risk factor as changing out all system-level python code could have a >>>> cascade effect on system functionality and additional dependencies. >>>> While >>> >>> Changing the distro/system Python version out from under the rest of >>> the distro seems like an enormous time sink, much less a significant >>> reliability risk. >>> >>>> A better solution would be to build python3 and the associated >>>> requirements from the existing RHEL EPEL (Extra Packages for >>>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>>> This version correctly installs in a segregated directory tree. >>> >>> We would probably want to run a significant subset of the upstream >>> OpenStack testing on this combination as it is not (AFAIK) tested there. >>>  But this is true of any runtime + distro combination that is not in >>> the fairly short list of combinations that upstream OpenStack actively >>> tests. >>> >>>> Another option would be to delay the actual python2 conversion to >>>> StarlingX 4.0, the OpenStack Train release will still support python2. >>> >>> One downside to this is it leaves us no margin to defer the change >>> again, this is our second chance as it were.  OpenStack U (as of now) >>> is likely to drop py2 support as a guarantee across-the-board. >>> >>>> There is still work that is needed beyond the conversion of the >>>> python code itself to things like RPM specfiles data and other source >>>> code (such as, C code that has #includes of python2.7). It's not >>>> clear to me how much functional testing with python3 has occurred for >>>> the flock beyond what Dean has started with devstack. >>> >>> I managed to get the fault services running on py3, sysinv fell over >>> during the dbsync in my quick post-PTG trial run.  That is as far as I >>> took it.  
Anyone who wants to try can pick out the local.conf I posted >>> [0] >>> >>> dt >>> >>> [0] http://paste.openstack.org/show/753844/ >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From yong.hu at intel.com Tue Jul 9 13:48:47 2019 From: yong.hu at intel.com (Hu, Yong) Date: Tue, 9 Jul 2019 13:48:47 +0000 Subject: [Starlingx-discuss] WW28 (July 9th) - Meeting notes from stx.distro.openstack team Message-ID: https://etherpad.openstack.org/p/stx-distro-openstack-meetings 7/9 meeting · Yong was running this meeting on behalf of Dean (Dean is on vacation) · Nova placement helm chart https://review.opendev.org/#/c/662229/ - still pending * Zhipeng - addressed the comments from Chris Dent on July 4th * Will join the IRC-meeting of OpenStack-Helm on Tuesday (July 9th US time). · Final helm override status (Gerry) - expect all reviews out by WW27. Have we done it? – * @Bill to follow up with Gerry on the status. · Orphan instance cleanup: https://review.openstack.org/#/c/627765/ patch set went up to 31 * new patch set finished the new code (relatively smaller scale of changes) based on comments from Sean (+1 reviewer, but he is key reviewer with influence), unit test is still WIP. Has pinged Sean in IRC channel. · NUMA topology: https://review.openstack.org/#/c/621476/ patch set went to 46 * addressed some unit test failures, but this patch is actually in the queue of review (Run Queue). · Rebase to the new Nova branch: EB ready, dev-test (deployment testing) done, patch under review. * Zhipeng has made an EB and done the deployment testing by himself, sent to Ricardo for further verification on LP: https://bugs.launchpad.net/starlingx/+bug/1827692 * new nova branch (https://github.com/starlingx-staging/stx-nova/tree/stx/stein.2): https://review.opendev.org/#/c/669053/ - patch under review, already got one +2. · Shuquan to update other patches' upstream progress owned by 99cloud. * @Boxiang help to follow up with Shuquan, like sending updates/progress via mail · Other HIGH gating issues: * https://bugs.launchpad.net/starlingx/+bug/1820882 - Boxiang Zhou (zhu.boxiang at 99cloud.net) to update? § Boxiang has tested this case[0] on the stx.2.0 tag recently and live migration still does not honour server group(anti-affinity) policy. § The issue[0] is duplicated to a nova upstream issue[1]. And I have post a patch to fix it. Now the patch is still in-review. 
§ Next Step: continue to follow up my patch and try to ping some nova cores to review it.
§ [0] https://bugs.launchpad.net/starlingx/+bug/1820882
§ [1] https://bugs.launchpad.net/nova/+bug/1821755
§
§ These 2 patches are important because 3~4 LPs are related to the fix.
* https://bugs.launchpad.net/starlingx/+bug/1831130 - Chenjie Xu (Intel) to update: "Waiting on bug reporter to reproduce the bug on bare metals."

Opens:
· 1. Do we still have minutes sent via mail? YES, and will include Yongli and JF in the loop.
· 2. Review medium LPs next week with Ada, Bill, Ghada and Frank. Background: we currently have 3 HIGH LPs, which are on the radar, and we have to review ALL gating medium issues to ensure important issues are handled with priority. Yong will send out the meeting request.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Anirudh.Gupta at hsc.com Wed Jul 10 07:17:39 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Wed, 10 Jul 2019 07:17:39 +0000
Subject: [Starlingx-discuss] StarlingX Bootimage.iso Queries
Message-ID: 

Hi Team,

I have tried installing StarlingX 2018.10 and 2019.05 on my Dell, HP and Supermicro servers/machines, but I am facing the following issues:

* If I create a bootable pen drive and try to install the image, it gives me the following error

dracut-initqueue[628]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[628]: Warning: dracut-initqueue timeout - starting timeout scripts
dracut-initqueue[628]: Warning: dracut-initqueue timeout - starting timeout scripts

and eventually it drops into the dracut shell.

dracut:/# _

However, this issue is not seen when I install the bootimage.iso from a bootable DVD. The image depicting the issue is attached in the mail.

Is it compulsory to install bootimage.iso using a CD/DVD only?

* StarlingX is not able to detect a USB to Ethernet NIC adapter

I have installed StarlingX on one of my machines, which has only one on-board NIC. I have a USB to Ethernet adapter available which is detected on Windows/Linux. I have tested it with Windows, Ubuntu and CentOS, but StarlingX is not able to detect it. It only detects the on-board NIC a machine has.

Is it possible to detect a USB to Ethernet NIC in StarlingX?

Regards
Anirudh Gupta

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed... 
From zhipengs.liu at intel.com  Fri Jul 12 05:41:30 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Fri, 12 Jul 2019 05:41:30 +0000
Subject: [Starlingx-discuss] About configfile patch for redfishtool
In-Reply-To: <93814834B4855241994F290E959305C7530AD92E@SHSMSX104.ccr.corp.intel.com>
References: <93814834B4855241994F290E959305C7530AD92E@SHSMSX104.ccr.corp.intel.com>
Message-ID: <93814834B4855241994F290E959305C7530AE284@SHSMSX104.ccr.corp.intel.com>

Hi Eric and Saul,

My patch has already been accepted by upstream and merged.
https://github.com/DMTF/Redfishtool/pull/67
So, I believe we can remove this patch soon.

Thanks!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
Sent: 2019年7月8日 22:08
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] About configfile patch for redfishtool

Hi all,

Let's discuss this topic here. Your comments are welcome!

Thanks!
Zhipeng

-----Original Message-----
From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com]
Sent: 2019年7月8日 19:41
To: Saul Wold ; Liu, ZhipengS
Cc: Hu, Yong ; Rowsell, Brent ; Eslimi, Dariush ; Khalil, Ghada ; Xie, Cindy
Subject: RE: About configfile patch

Saul,

Very good points and suggestions. Thank you.

Zhipeng,

Can you put this out to the general starlingx discussion list as well as the Redfish discussion list, and keep us informed as to how the Redfish community is reacting to the change request.

Eric.

> -----Original Message-----
> From: Saul Wold [mailto:sgw at linux.intel.com]
> Sent: Sunday, July 07, 2019 6:42 PM
> To: Liu, ZhipengS; MacDonald, Eric
> Cc: Hu, Yong
> Subject: Re: About configfile patch
> Importance: High
>
>
> Hi Zhipeng, Eric:
>
> I would like to see this move to the general discuss list; I think
> it's appropriate for everyone to understand what's going on. Thanks for
> getting the patch proposed to upstream Redfish.
>
> I am concerned first with the technical debt and making sure that the
> Redfish upstream community is aware of what we are proposing / doing
> in StarlingX. I had another look at this and I now have a better idea
> of why it kept being a concern.
>
> 1) Processing the config file itself inside of options processing is
> not generally a good idea. It does not allow for easy parsing and extension
> of the config file's contents.
>
> 2) I see you are using json, this is good; thanks for proposing it to the
> Redfish community, they might have an idea to use a different format
> for the contents of the config file. This gets the json idea out there
> now rather than finding out in 6 months that they decided to use a
> different format.
>
> 3) As I have mentioned before, having plain text passwords is never my
> favorite way to go, but since we are already down that path with IPMI,
> let's keep going; again, maybe the Redfish community has thought about
> this, or this patch proposal will force that discussion.
>
> My Sunday afternoon thoughts.
>
> Sau!
>
>
> On 7/4/19 7:24 PM, Liu, ZhipengS wrote:
> > +Saul and Yong,
> >
> > Hi Saul,
> >
> > The email thread below may give you some clarification about your concern.
> >
> > Zhipeng
> >
> > *From:* Liu, ZhipengS
> > *Sent:* 2019年7月5日 10:20
> > *To:* 'MacDonald, Eric'
> > *Subject:* RE: About configfile patch
> >
> > I can see the password through
> >
> > ps  –n
> >
> > Thanks!
> >
> > Zhipeng
> >
> > *From:* Liu, ZhipengS
> > *Sent:* 2019年7月5日 10:01
> > *To:* 'MacDonald, Eric'
> > *Subject:* RE: About configfile patch
> >
> > Hi Eric,
> >
> > Thanks for your clarification!
> >
> > BTW, how do I use the process listing, could you give me an example? J
> >
> > Zhipeng
> >
> > *From:* MacDonald, Eric [mailto:Eric.MacDonald at windriver.com]
> > *Sent:* 2019年7月4日 20:26
> > *To:* Liu, ZhipengS
> > *Subject:* RE: About configfile patch
> >
> > Hi Zhipeng,
> >
> > See below.
> >
> > Is Saul's concern the technical debt of the config patch or the pw
> > file in general? It seems the former.
> >
> > What can I do, should I speak with him?
> >
> > Eric.
> >
> > *From:* Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
> > *Sent:* Thursday, July 04, 2019 4:30 AM
> > *To:* MacDonald, Eric
> > *Subject:* About configfile patch
> > *Importance:* High
> >
> > Hi Eric,
> >
> > For the configfile patch, Saul still has some concerns about it and about why
> > we use a password file.
> >
> > Anyway, I have submitted the patch to upstream.
> >
> > https://github.com/DMTF/Redfishtool/pull/67
> >
> > From the code, I can see that MTC gets bmc_pw through keyring.
> >
> > */[... Eric ] or barbican now, yes./*
> >
> > Then we pass the bmc_pw through extra_info to the ipmi command thread.
> >
> > */[... Eric ] Yes/*
> >
> > "The current implementation using IPMITOOL puts the BMC password
> > into a short lived root privilege temp file so that it does not show
> > up in a process listing."
> >
> > Why do we have to use a temp file instead of showing up in a process
> > listing? The password can be got through a process listing? Not
> > clear about this point.
> >
> > */[... Eric ] If we use the –P option when invoking ipmitool,
> > then while that command is active, anyone who does a process listing
> > can see the –P option and its value in the process listing. This is a
> > security issue because a non-root user can learn the BMC password
> > for any host by just doing a process listing on the active
> > controller./*
> >
> > Could you give me more detailed information, thanks!
> >
> > From the code below, it seems the related code has been commented out. Does that
> > mean the file may not be removed right away even with the file open?
> >
> > So, still not sure which one is safer.
> >
> > */[... Eric ] The temp file is removed in the thread after execution
> > completion or timeout./*
> >
> > */Example code taken from mtcThreads.cpp/*
> >
> > */There is also a garbage collection cleanup audit that ensures
> > these temp files do not linger due to 'say' a process restart during
> > command execution./*
> >
> > *
> > * TODO: fix or figure out why the unlink removes the file right away even
> > *       with the file open.
> > *
> > *****************************************************************************/
> >
> > */[... Eric ] The above comment was added simply because when I was
> > coding I didn't understand why the unlink removes the file right
> > away./*
> >
> > */I think now that it was because the file was not open at the time
> > the unlink was executed./*
> >
> > */In any case the tmp pw file is still removed with redundancy./*
> >
> >     int hostUtil_mktmpfile ( string hostname, string basename, string & filename, string data )
> >     {
> >         // buffer to hold the temporary file name
> >         char tempBuff[MAX_FILENAME_LEN];
> >         int fd = -1;
> >
> >         memset(tempBuff,0,sizeof(tempBuff));
> >         if ( basename.empty() || data.empty() )
> >         {
> >             slog ("%s called with one or more bad parameters (%d:%d)\n",
> >                       hostname.c_str(), basename.empty(), data.empty());
> >             return (0);
> >         }
> >
> >         /* add what mkstemp will make unique */
> >         basename.append("XXXXXX");
> >
> >         // Copy the relevant information in the buffers
> >         snprintf ( &tempBuff[0], MAX_FILENAME_LEN, "%s", basename.data());
> >
> >         // Create the temporary file, this function will
> >         // replace the 'X's with random letters
> >         fd = mkstemp(tempBuff);
> >
> >         // Call unlink so that whenever the file is closed or the program exits
> >         // the temporary file is deleted.
> >         //
> >         // Note: Unlinking removes the file immediately.
> >         // Commenting out. Caller must remove file.
> >         //
> >         // unlink(tempBuff);
> >
> > Thanks!
> >
> > Zhipeng
> >
> > *From:* MacDonald, Eric [mailto:Eric.MacDonald at windriver.com]
> > *Sent:* 2019年7月2日 19:21
> > *To:* Liu, ZhipengS
> > *Subject:* WolfPass Sensors
> >
> > Hi Zhipeng,
> >
> > I've been upgrading the firmware on our set of WolfPass servers.
> >
> > Even with the upgrade I've been having a hard time reading the
> > server sensors through redfish.
> >
> > Can you send me the command(s) you use and the output you see for/when
> > dumping the sensors on your WolfPass server?
> >
> > I use the following commands on the Supermicro but it seems that the
> > WolfPass servers don't support this method.
> >
> > redfishtool -r -u -p Chassis Thermal
> >
> > redfishtool -r -u -p Chassis Power
> >
> > Here are the firmware versions I have. I wonder if it's my SDR version.
> > What is yours?
> >
> > WolfPass      BMC FW          ME             SDR    Redfish Version
> > WolfPass 1    1.93.870cf4f0   04.00.04.340   1.04   "RedfishVersion": "1.1.0"
> > WolfPass 2    1.93.870cf4f0   04.00.04.340   1.04   "RedfishVersion": "1.1.0"
> > WolfPass 3    1.93.870cf4f0   04.00.04.288   1.29   "RedfishVersion": "1.1.0"
> > WolfPass 4    1.29.7d703f59   04.00.04.288   1.29   No Redfish Support
> > WolfPass 5    1.29.7d703f59   04.00.04.288   1.29   No Redfish Support
> > WolfPass 6    1.29.7d703f59   04.00.04.288   1.29   No Redfish Support
> > WolfPass 7    1.29.7d703f59   04.00.04.288   1.29   No Redfish Support
> > WolfPass 8    1.43.660a4315   04.00.04.294   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 9    1.43.660a4315   04.00.04.294   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 10   1.43.660a4315   04.00.04.294   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 11   1.43.660a4315   04.00.04.294   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 12   1.43.660a4315   04.00.04.340   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 13   1.93.870cf4f0   04.00.04.294   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 14   1.93.870cf4f0   04.00.04.294   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 15   1.43.660a4315   04.00.04.340   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 16   1.43.660a4315   04.00.04.294   1.43   "RedfishVersion": "1.1.0"
> > WolfPass 17   1.43.660a4315   04.00.04.294   1.43   "RedfishVersion": "1.1.0"
> >
> > Cheers,
> >
> > Eric MacDonald, MTS, Engineering, Wind River
> > direct 613.963.1387  fax: 613.492.7870  skype: eric.r.macdonald
> > 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
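The process-listing exposure that motivated this change is easy to demonstrate on any Linux box. A minimal sketch using the invocation style shown in this thread (the address and credentials below are placeholders, not real values):

    redfishtool -r 10.10.10.10 -u admin -p MySecret Chassis Thermal &
    ps -ef | grep [r]edfishtool
    # ... redfishtool -r 10.10.10.10 -u admin -p MySecret ...  <- visible to any local user

Moving the credentials out of the command line and into a root-readable file closes that window, which is the point of the merged pull request above.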
From shuicheng.lin at intel.com  Fri Jul 12 07:15:03 2019
From: shuicheng.lin at intel.com (Lin, Shuicheng)
Date: Fri, 12 Jul 2019 07:15:03 +0000
Subject: [Starlingx-discuss] StarlingX Bootimage.iso Queries
In-Reply-To:
References:
Message-ID: <9700A18779F35F49AF027300A49E7C76608B3FD1@SHSMSX105.ccr.corp.intel.com>

Hi Gupta,

For the 1st issue, please try to create the image in a Linux system with a command like the one below (sdb is the pen drive device):

sudo dd if=bootimage.iso of=/dev/sdb bs=1M && sync

For the 2nd issue, StarlingX doesn't support USB Ethernet by default. You need to modify the kernel configuration to support it if you want it. Also, StarlingX uses CentOS 7.6 with a 3.10 kernel, which is a rather old kernel, so not all USB Ethernet drivers can be supported. Other code also needs to be updated to support USB Ethernet. You could have a try with the below 2 patches applied:
https://review.opendev.org/657291
https://review.opendev.org/657313

It will be much easier if you could add a PCI Ethernet card to your machine. :)

Best Regards
Shuicheng

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Wednesday, July 10, 2019 3:18 PM
To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: [Starlingx-discuss] StarlingX Bootimage.iso Queries

[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
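Before rebuilding the kernel, it is worth confirming whether a driver for the specific adapter even ships with the running kernel. A hedged sketch with standard commands (cdc_ether and ax88179_178a are just common USB NIC driver examples, not necessarily the one this adapter needs):

    lsusb                              # note the adapter's vendor:product ID
    modinfo cdc_ether ax88179_178a 2>/dev/null | grep -E '^(filename|description)'
    find /lib/modules/$(uname -r) -name '*usbnet*'

If modinfo finds nothing for the adapter's driver, the kernel configuration patches above are a prerequisite rather than an optional step.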
From Eric.MacDonald at windriver.com  Fri Jul 12 12:40:28 2019
From: Eric.MacDonald at windriver.com (MacDonald, Eric)
Date: Fri, 12 Jul 2019 12:40:28 +0000
Subject: [Starlingx-discuss] About configfile patch for redfishtool
In-Reply-To: <93814834B4855241994F290E959305C7530AE284@SHSMSX104.ccr.corp.intel.com>
References: <93814834B4855241994F290E959305C7530AD92E@SHSMSX104.ccr.corp.intel.com> <93814834B4855241994F290E959305C7530AE284@SHSMSX104.ccr.corp.intel.com>
Message-ID: <210898B96CA058408C55992CCAD98676C1044D6E@ALA-MBD.corp.ad.wrs.com>

Awesome !!

> -----Original Message-----
> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
> Sent: Friday, July 12, 2019 1:41 AM
> To: MacDonald, Eric; Saul Wold
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: RE: [Starlingx-discuss] About configfile patch for redfishtool
> Importance: High
>
> Hi Eric and Saul,
>
> My patch has already been accepted by upstream and merged.
> https://github.com/DMTF/Redfishtool/pull/67
> So, I believe we can remove this patch soon.
>
> Thanks!
> Zhipeng
>
> [...]
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From Ian.Jolliffe at windriver.com  Fri Jul 12 14:22:26 2019
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Fri, 12 Jul 2019 14:22:26 +0000
Subject: [Starlingx-discuss] [TSC] July 11th minutes
Message-ID:

7/11 meeting
============

We started with a good discussion on specs out for review – [0]

Specs - what to do with the implemented folder - recommendation: when R2 items are implemented, move them to implemented once completed.
Looking forward - do we want to rename the folders to be release centric, or align with the rest of the project nomenclature? Looking for feedback.

R3 Planning - captures the current view of R3 planning - the team felt that this was sufficient as the initial proposed candidate for the upcoming milestone. Release team to review.

In
==
Distributed Cloud (deferred from R2) - Lead: Dariush
Backup and Restore (deferred from R2) - Lead: Frank
Containerized Openstack clients (deferred from R2) - Lead: Frank
Fault Containerization (deferred from R2) - Lead: Frank
Up version Kubernetes and dependencies (docker, calico, helm etc.) - Lead: Frank
Up version Openstack to Train - Lead: Bruce
Intel GPU support for k8s - Lead: Cindy
Intel QAT support for k8s - Lead: Cindy
IPv6 support for PXE boot network - Lead:
TSN support in VM, https://review.opendev.org/655833 - Lead: Forrest
R2->R3 upgrade - Lead: Dariush - This may require a dot release
Redfish support, https://review.opendev.org/#/c/668300/ - Lead: Dariush
Infrastructure and cluster monitoring, https://review.opendev.org/#/c/665208/ - Lead: Dariush - Revisit - the email thread needs input prior to the TSC meeting
Python 2->3 cutover - Lead: Cindy - Saul raised concerns with the amount of churn this will introduce. See the email on the ML for more details. We will discuss next week.

Under Review
============
Containerize CEPH, https://review.opendev.org/656371 - Lead: Vivian
Containerize OVS-DPDK, https://review.opendev.org/655830 - Lead: Forrest

Prep/Ongoing
Multi-OS support - Lead: Abraham/Saul
Intel FPGA support for k8s prep - Lead: Ghada
Sysvinit -> systemd conversion/cleanup - Lead: Saul

To Review
=========
[brucej] Please review the following features that our team plans to work on:
1) IA platform feature enablement - defer
   Is this strictly a validation exercise or adding additional kernel enablement? If it requires new kernel functionality, what is the strategy to get that into StarlingX?
2) Performance testing - Could we review this in the next TSC meeting? Can we get some more detail around this?
3) IOT device management - Could we review this in the next TSC meeting? Ditto

Separate distro build from flock build
Flock Versioning Spec - In progress
Use PBR - OpenStack's SemVer process

From sgw at linux.intel.com  Fri Jul 12 15:13:09 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Fri, 12 Jul 2019 08:13:09 -0700
Subject: [Starlingx-discuss] About configfile patch for redfishtool
In-Reply-To: <210898B96CA058408C55992CCAD98676C1044D6E@ALA-MBD.corp.ad.wrs.com>
References: <93814834B4855241994F290E959305C7530AD92E@SHSMSX104.ccr.corp.intel.com> <93814834B4855241994F290E959305C7530AE284@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676C1044D6E@ALA-MBD.corp.ad.wrs.com>
Message-ID: <131caf10-0a13-f66e-8e08-e9a63b478aab@linux.intel.com>

+1, thanks for following up with Upstream, I am glad they accepted this.
If you plan on extending the configuration file usage, I would recommend trying to push it upstream first.
I would also suggest re-factoring where and how the config file is parsed, that is outside of the options parsing, not changing the format.

Sau!

On 7/12/19 5:40 AM, MacDonald, Eric wrote:
> Awesome !!
>
>> -----Original Message-----
>> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
>> Sent: Friday, July 12, 2019 1:41 AM
>> To: MacDonald, Eric; Saul Wold
>> Cc: starlingx-discuss at lists.starlingx.io
>> Subject: RE: [Starlingx-discuss] About configfile patch for redfishtool
>> Importance: High
>>
>> Hi Eric and Saul,
>>
>> My patch has already been accepted by upstream and merged.
>> https://github.com/DMTF/Redfishtool/pull/67
>> So, I believe we can remove this patch soon.
>>
>> Thanks!
>> Zhipeng
>>
>> [...]
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
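For list readers who want to try the merged option, the shape of the change is roughly the following. This is a sketch only; the exact flag name and JSON schema belong to the merged PR above and are not reproduced here, and the key names below are illustrative:

    cat > bmc.json <<'EOF'
    {
      "user": "admin",
      "password": "MySecret"
    }
    EOF
    chmod 600 bmc.json   # keep the credentials out of other users' reach
    # credentials are then read from the file instead of being passed
    # with -u/-p on the command line

Either way the win is the same: nothing secret appears in a process listing.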
From Ian.Jolliffe at windriver.com  Fri Jul 12 15:17:23 2019
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Fri, 12 Jul 2019 15:17:23 +0000
Subject: [Starlingx-discuss] [TSC] July 11th minutes
Message-ID: <0EDD81AA-2319-4170-815C-809403544631@windriver.com>

Adding the missing reference:

[0] - https://review.opendev.org/#/q/status:open+AND+project:%255Estarlingx/specs

On 2019-07-12, 10:23 AM, "Jolliffe, Ian"  wrote:

    [...]

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From liang.a.fang at intel.com  Fri Jul 12 16:06:06 2019
From: liang.a.fang at intel.com (Fang, Liang A)
Date: Fri, 12 Jul 2019 16:06:06 +0000
Subject: [Starlingx-discuss] Is it possible to run tests inside Intel
Message-ID:

Hi

I'm trying to fix two bugs that were reported by WR folks, but they are hard to reproduce manually on the Intel side because they happen intermittently. I'm wondering if the same test scripts can be run regularly on some server inside Intel, so that if the issue happens at some point, we can root-cause it in the live system.

Regards
Liang
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
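A low-tech way to get that kind of always-on reproduction is a watchdog loop that stops the moment the failure appears, leaving the system in the failed state for live debugging. A sketch in plain shell (run_test.sh and collect_logs.sh are hypothetical placeholders for the actual test and log-collection scripts):

    while ./run_test.sh; do
        sleep 300                 # re-run every 5 minutes while it keeps passing
    done
    ./collect_logs.sh             # first failure: grab state, leave the system up
    echo "reproduced at $(date)" | tee -a repro.log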
From dangtrinhnt at gmail.com  Fri Jul 12 16:11:32 2019
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Sat, 13 Jul 2019 01:11:32 +0900
Subject: [Starlingx-discuss] OpenInfra Days Vietnam 2019 (Hanoi) - Call For Presentations
In-Reply-To:
References:
Message-ID:

Hello teams,

Less than 2 months remain until OpenInfra Days Vietnam. We would love to hear the voices of the Kata, StarlingX, Airship, Zuul, and other OpenStack project teams. Please join us for this one-day event in Hanoi this August 24th.

- *Event's website:* http://day.vietopeninfra.org/
- *Call for Presentations (extended deadline 30th July):* https://forms.gle/iiRBxxyRv1mGFbgi7
- *Buy tickets:* https://ticketbox.vn/event/vietnam-openinfra-days-2019-75375

Tell me if you have any questions.

Yours,
Trinh

On Tue, May 21, 2019 at 2:36 PM Trinh Nguyen wrote:

> Hello,
>
> Hope you're doing well :)
>
> The OpenInfra Days Vietnam 2019 [1] is looking for speakers on many
> different topics (e.g., container, CI, deployment, edge computing, etc.).
> If you would love to have a taste of Hanoi, the capital of Vietnam, please
> join us for this one-day event and submit your presentation [2].
>
> *- Date:* 24 AUGUST 2019
> *- Location:* INTERCONTINENTAL HANOI LANDMARK72, HANOI, VIETNAM
>
> Especially this time, we're honored to have the Upstream Institute
> Training [3] hosted by the OpenStack Foundation on the next day (25 August
> 2019).
>
> [1] http://day.vietopeninfra.org/
> [2] https://forms.gle/iiRBxxyRv1mGFbgi7
> [3] https://docs.openstack.org/upstream-training/upstream-training-content.html
>
> See you in Hanoi!
>
> Bests,
>
> On behalf of the VietOpenInfra Group.
>
> --
> *Trinh Nguyen*
> *www.edlab.xyz *
>

--
*Trinh Nguyen*
*www.edlab.xyz *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ian.Jolliffe at windriver.com  Fri Jul 12 17:41:01 2019
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Fri, 12 Jul 2019 17:41:01 +0000
Subject: [Starlingx-discuss] [Packet SIG] Meeting Notes 6/25 and 7/9
Message-ID: <41443589-74AB-4775-8ED7-3D9384826161@windriver.com>

Meeting #5 - 1700UTC - Tue - 2019-07-09

Recap from last meeting
  Automotive use case: Diagrams - Ian to send prior to meeting #6
  Geoverse - Scott identified and has a meeting with them to discuss their "Radio Test Kitchen"
  Cosmos - City test bed - Akraino Edge part of that project
    Opportunity for an Akraino Blueprint?
    Part of a PAWR - National Science Foundation project
    Also happening in Las Vegas - opportunities in two other cities.
    More network-centric focus for testing

Workshops - presentation Shanghai / Kubecon
  John asked and Ian confirmed we are planning an STX hands-on workshop in Shanghai
  John will add a placeholder for the resources so they don't get double booked.

Greg - update on progress - wasn't on the call - update at the next meeting in 2 weeks
  John asked why Squid is being used - use case?
Meeting #4 - 1700UTC - Tue - 2019-06-25

Attendance
  Ian - Wind River / STX TSC
  Emmet - Codethink
  Scott - Packet.com - Packet labs / R&D group / Outreach
  James - Packet.com / Cloud Architecture
  Ed - Packet labs / ARM / Working with the Cloud Native Foundation to help with infra
  Bill - Wind River
  Curtis - STX TSC
  John - Technical outreach for Packet / OpenStack Foundation user committee

Automotive use case discussion:
  Practical approach - chicken-or-egg problem
    Develop a proposal to attract interest and define the use case
    OEM data collection and feedback
    Emmet - could take the proposal to automotive players
    John could evangelize the use case
  Pittsburgh automotive test site? Argo AI
    Plus an Upper Manhattan location possible / potential
  A virtual-environment proof point of the use cases could be good, and then move to a real-world environment
    Don't need to gate this on having vehicles?
  Need a first order diagram of what this would look like, tied to automotive need - Ian to take a first crack at a diagram
  Give a shout out on the community call about this to attract people to this use case/POC
  Carnegie-Mellon alumni - Uber office
  Are the devices at the edge/towers? For the POC what sort of device would work?
    Reach out to a device manufacturer
    Federated Wireless - keeper of CBRS spectrum - likely has a list of devices
    Simulate a car somehow
    Geoverse is another player who works with the spectrum and may have end device recommendations - Scott will reach out

Greg - update on bare metal STX on packet.com
  ipxe to boot the initial controller
  stx uses the first controller as the pxe boot server for all the other ones
  have to access the BIOS in the other servers to get them to pxe boot properly
  switch to layer 2 networks
  have to have an extra node for nat rules, provide oem api access

From maria.g.perez.ibarra at intel.com  Fri Jul 12 23:38:28 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Fri, 12 Jul 2019 23:38:28 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190712
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-12 (link)
Status: GREEN

===========================================

Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

===========================================

Regards
Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vivian.zhu at intel.com  Mon Jul 15 01:54:52 2019
From: vivian.zhu at intel.com (Zhu, Vivian)
Date: Mon, 15 Jul 2019 01:54:52 +0000
Subject: [Starlingx-discuss] Is it possible to run tests inside Intel
In-Reply-To:
References:
Message-ID: <371DF9A763E9F44F924F4A821FC070264D0F29D3@SHSMSX105.ccr.corp.intel.com>

Would any log be helpful to narrow down the issue rather than reproducing it, since it happens intermittently?

Thanks!
- Vivian

SSG OTC NST Storage
Tel: (8621)61167437

From: Fang, Liang A
Sent: Saturday, July 13, 2019 12:06 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Is it possible to run tests inside Intel

[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From liang.a.fang at intel.com  Mon Jul 15 02:19:19 2019
From: liang.a.fang at intel.com (Fang, Liang A)
Date: Mon, 15 Jul 2019 02:19:19 +0000
Subject: [Starlingx-discuss] Is it possible to run tests inside Intel
In-Reply-To: <371DF9A763E9F44F924F4A821FC070264D0F29D3@SHSMSX105.ccr.corp.intel.com>
References: <371DF9A763E9F44F924F4A821FC070264D0F29D3@SHSMSX105.ccr.corp.intel.com>
Message-ID:

Hi Vivian and all

The log attached in the Launchpad was not collected in time, so the valid info was lost. I hope the reporter can provide the log before the system auto-recovers. I commented in:
https://bugs.launchpad.net/starlingx/+bug/1831635

Although the issue happens intermittently, it can always be reproduced once it happens. So it would be easier to root-cause it if we can log in to the live system.

I suppose that in StarlingX all the cinder related logs are located in /var/log/containers/, something like:
cinder-api-...log
cinder-scheduler-...log
cinder-volume-...log

Please, anybody, correct me if I misunderstand this. Thanks.

Regards
Liang

From: Zhu, Vivian
Sent: Monday, July 15, 2019 9:55 AM
To: Fang, Liang A ; starlingx-discuss at lists.starlingx.io
Subject: RE: Is it possible to run tests inside Intel

[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
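Assuming the paths above are right, a quick sanity check on a running controller would be (standard shell; the exact file names depend on the pod instances):

    ls /var/log/containers/ | grep -i cinder
    sudo tail -f /var/log/containers/cinder-volume-*.log

Capturing these with tail -f into a file while the test runs would preserve the window that was lost in the Launchpad attachment.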
The background:
As a part of resource management, kubernetes has provided a device plugin framework [2] for vendors to advertise their resources to the kubelet since version 1.8. StarlingX already supports SR-IOV CNI plugins [3]. Intel-gpu-plugins is a device plugin implementation [4] for Intel GPUs (with driver i915). If intel-gpu-plugins were integrated into StarlingX, users could deploy their pods with Intel GPU resource requests or limits.

Proposal:
Deploy intel-gpu-plugins as a daemon set with node selector "intelgpu: enabled". The kubernetes label "intelgpu: enabled" will be set automatically once a supported GPU device is detected on the node. Details are as follows:

1. Build the StarlingX plugin docker image based on [5]; the implementation in StarlingX is [6] and [7].

2. Deploy the Intel-gpu-plugins daemon set in task "bringup_kubemaster" after the kubernetes master has been initialized during the ansible bootstrap process. Add a value "import_plugins" and a value list "kube_plugins" as conditions for deploying the Intel-gpu-plugins daemon set, so users can determine whether Intel-gpu-plugins will be enabled. Create the file "/etc/platform/enabled_kube_plugins" and write the list "kube_plugins" into the file after activating the Intel-gpu-plugin daemon set if "import_plugins" is true. A partial implementation is [8].

3. Detect supported GPU devices with the help of the sysinv agent and request that the kubernetes label "intelgpu: enabled" be set for the specific node by calling the sysinv conductor rpcapi. The sysinv conductor will check the file "/etc/platform/enabled_kube_plugins", and set the kubernetes label if the file exists and intel-gpu-plugins is in the list. A partial implementation is [9].

[1] https://storyboard.openstack.org/#!/story/2005937
[2] https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md
[3] https://review.opendev.org/#/c/655495/
[4] https://github.com/intel/intel-device-plugins-for-kubernetes
[5] https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/cmd/gpu_plugin/README.md
[6] https://review.opendev.org/668803
[7] https://review.opendev.org/668808
[8] https://review.opendev.org/666510
[9] https://review.opendev.org/666511

Thanks
Ran

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Frank.Miller at windriver.com Mon Jul 15 13:06:23 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Mon, 15 Jul 2019 13:06:23 +0000
Subject: [Starlingx-discuss] StarlingX Containerization Weekly Meeting
Message-ID:

Please note we will hold a Containerization meeting today.

Team Agenda for July 15 meeting:

1. 41 stx.2.0 gating bugs. Will discuss status for the following higher priority LPs:
1832781 AIO standard profile: Incorrect Pod affinity [Jim Gauld]
1833746 Some helm charts can override other helm charts [Bob Church]
1833096 Instances crash on each system application-update operation [Gerry Kopec]
1833609 stx-openstack application stuck at processing chart: osh-openstack-ceph-rgw after unlock standby controller [Lin Shuicheng]
1830737 & 1830793 Openstack cmds not working intermittently [Stefan Dinescu]
1833323 Openstack manifest apply hung applying cinder manifest [Tee Ngo]
1834796 AIO: Too many rabbit threads [Bin Yang]
Also, there are another 33 medium-priority bugs, many with no updates since May or April. All primes are requested to identify a forecast for when you can have a proposed fix.
2. Remaining SB status:
SBs with remaining to-do items:
2005860 Upversion container components (armada, docker, kubernetes) [Al Bailey]
2002843 K8s Platform Support [Jerry Sun - last task was for k8s API authentication; then feature acceptance]
SBs with no to-do items, but items with code out for review:
2004760 Containerize the ironic service [Mingyuan Qi]
2003909 HELM Chart Override Generation [Gerry Kopec/Boxiang Zhu]
2004764 Removal of bare metal Openstack related code & 2005358 stx.config sysinv container cleanup [Al Bailey]

3. Other topics?

Etherpad: https://etherpad.openstack.org/p/stx-containerization
Timeslot: 11am EST / 8am PDT / 1600 UTC
Call details
* Zoom link: https://zoom.us/j/342730236
* Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
Agenda and meeting minutes
Project notes are at https://etherpad.openstack.org/p/stx-containerization
Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4257 bytes Desc: not available URL:

From haochuan.z.chen at intel.com Mon Jul 15 14:53:11 2019
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Mon, 15 Jul 2019 14:53:11 +0000
Subject: [Starlingx-discuss] question about LP1828056
Message-ID: <56829C2A36C2E542B0CCB9854828E4D8562497DC@CDSMSX102.ccr.corp.intel.com>

Hi folks

I work on LP1828056, https://bugs.launchpad.net/starlingx/+bug/1828056

Issue description: after the system is deployed, begin to deploy the openstack application:
$ system application-upload
$ system helm-override-update --set conf.panko.database.event_time_to_live=24828899 stx-openstack panko openstack
$ system application-apply stx-openstack

When deploying the chart for osh-openstack-panko, the deployment fails at the job panko-db-sync.

controller-0:~$ kubectl get jobs -n openstack | grep panko
panko-db-init 1/1 37s 107m
panko-db-sync 0/1 107m 107m
panko-events-cleaner-1563196200 0/1 90m 90m
panko-ks-endpoints 1/1 67s 107m
panko-ks-service 1/1 39s 107m
panko-ks-user 1/1 62s 107m

Check the pod log, and you will find "event_time_to_live = 2.4828899e+07" in panko.conf, but db-sync requires an integer.

/var/lib/kubelet/pods/73b84006-a6ff-11e9-ae87-525400d66765/volumes/kubernetes.io~secret/db-sync-conf/..2019_07_15_12_52_54.229165405/panko.conf
event_time_to_live = 2.4828899e+07

In /opt/platform/helm/19.01/stx-openstack/1.0-17/openstack-panko.yaml:
values:
  conf:
    panko:
      database:
        event_time_to_live: 24828899

So this may be tiller's issue. I find that if event_time_to_live is set to a value of more than 6 digits, a float value is generated in panko.conf.

Any idea about how to fix this issue? Thanks!

Martin, Chen
SSP, Software Engineer
021-61164330

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From erich.cordoba.malibran at intel.com Mon Jul 15 16:58:25 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Mon, 15 Jul 2019 16:58:25 +0000
Subject: [Starlingx-discuss] Procedure to configure rules to enable access to port in controller
Message-ID: <788c244328789972f5fb1e92c27953c5de943bab.camel@intel.com>

Hi,

I'm trying to enable a monitoring service in the controller. After some debugging I realized that the access rules are managed by calico.
So, I'm trying to apply the following policy:

apiVersion: "crd.projectcalico.org/v1"
kind: GlobalNetworkPolicy
metadata:
  name: custom-testing-port
spec:
  selector: "has(iftype) && iftype == 'oam'"
  order: 100
  applyOnForward: false
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
    ipVersion: 4
    protocol: TCP
    destination:
      ports: [9100]
  egress:
  - action: Allow
    ipVersion: 4
    protocol: TCP

And I run this with:

$ kubectl apply -f policy.yaml

Then I used this command to enable access to the port:

$ sudo iptables -A INPUT -p tcp -m multiport --dports 9100 -m comment --comment "Testing port expose" -j ACCEPT

This seems to be the same procedure as described here [0] to enable access to horizon. If I do a `iptables-save | grep 9100` I can see the rules:

-A cali-pi-custom-testing-port -p tcp -m comment --comment "cali:htmNJYyMe7qlodKr" -m multiport --dports 9100 -j MARK --set-xmark 0x10000/0x10000
-A cali-pi-custom-testing-port -p tcp -m comment --comment "cali:htmNJYyMe7qlodKr" -m multiport --dports 9100 -j MARK --set-xmark 0x10000/0x10000
-A INPUT -p tcp -m multiport --dports 9100 -m comment --comment "Testing port expose" -j ACCEPT
-A cali-pi-custom-testing-port -p tcp -m comment --comment "cali:htmNJYyMe7qlodKr" -m multiport --dports 9100 -j MARK --set-xmark 0x10000/0x10000

However, when I try to run a curl from an external host I get a "A communication error occurred: 'No route to host'" error. If I run this same command from inside the controller I get the data.

$ curl http://10.10.10.3:9100/metrics

I'm sure I'm missing a step; does anybody know what I can do next to enable the port access?

- [0] https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Generate_the_stx-openstack_application_tarball

From tingjie.chen at intel.com Mon Jul 15 17:14:57 2019
From: tingjie.chen at intel.com (Chen, Tingjie)
Date: Mon, 15 Jul 2019 17:14:57 +0000
Subject: [Starlingx-discuss] Proposal for Ceph containerization for StarlingX
Message-ID:

Hi,

There is a proposal for Ceph containerization for StarlingX; reviews and comments are welcome.

The background: Ceph is the standard persistent storage backend for StarlingX, and this story is to implement Ceph containerization. In the proposal, we discuss the benefits of containerized Ceph, and also give a solution and a design document for the implementation.

BP: https://review.opendev.org/#/c/656371
Design doc: https://docs.google.com/document/d/1lnAZSu4vAD4EB62Mk18mCgM7aAItl26sJa1GBf9DJz4/edit?usp=sharing
SB: https://storyboard.openstack.org/#!/story/2005527

Thanks,
Tingjie

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yatindrax.shashi at intel.com Fri Jul 12 09:08:37 2019
From: yatindrax.shashi at intel.com (Shashi, YatindraX)
Date: Fri, 12 Jul 2019 09:08:37 +0000
Subject: [Starlingx-discuss] Update the StarlingX 2.0 wiki about Additional storage need
Message-ID:

Hi team,

While I was trying to install the StarlingX 2.0 beta release dated July 07 in simplex mode (All in One), I found that I need an additional disk for the ceph storage OSD, but the StarlingX 2.0 wiki says zero or more. I tried with a partition of the disk; ceph does not allow creating the host storage from it and fails with the error "Invalid data: failed to create storage", as in the attached image. I came to know that we need a disk in addition to the root disk for the ceph storage. Could the relevant person please update this part of the wiki as well.
Mit freundlichen Grüßen/ with best regards, Yatindra Shashi IoT Technical Solutions Engineer On behalf of Developer Relations Division, Intel Corporation Munich, Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ceph nvme partion OSD add fail.JPG Type: image/jpeg Size: 195395 bytes Desc: ceph nvme partion OSD add fail.JPG URL: From Ghada.Khalil at windriver.com Mon Jul 15 22:27:13 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 15 Jul 2019 22:27:13 +0000 Subject: [Starlingx-discuss] stx.3.0 Milestone-1 status Message-ID: <151EE31B9FCCA54397A757BC674650F0C156C129@ALA-MBD.corp.ad.wrs.com> Hello all, The stx.3.0 milestone-1 is planned for this week. The criteria for the milestone are as follows: - Release priorities and major features defined. - High level resourcing secured. Reference: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Milestones Based on the candidate feature list that has been reviewed with the TSC and project leads from the community, the release planning team feels that we are in a good position for the milestone. The candidate list is available at: https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 Additional features/specs can still be proposed/reviewed. The next milestone is spec freeze which is currently scheduled for the week of August 12. At that point, no new specs will be considered for stx.3.0. The milestone will be more formally reviewed in the next community meeting on July 17/2019. Regards, Ghada From maria.g.perez.ibarra at intel.com Mon Jul 15 22:27:18 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 15 Jul 2019 22:27:18 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190715 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-15 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. 
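A minimal sketch of the check implied by the AIO-SX storage report above, assuming a second, unassigned disk is present on the host; "controller-0" and <disk_uuid> are placeholders, and exact arguments can vary by release:

# List the disks on the host and note the UUID of a free disk
# (one that is not the rootfs and has no partitions in use):
system host-disk-list controller-0
# Assign that disk as a ceph OSD; <disk_uuid> comes from the list above:
system host-stor-add controller-0 osd <disk_uuid>

If host-stor-add reports "Invalid data: failed to create storage", that is consistent with no whole spare disk being available, which is the situation the wiki update request describes.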
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mingyuan.qi at intel.com Tue Jul 16 01:20:21 2019 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Tue, 16 Jul 2019 01:20:21 +0000 Subject: [Starlingx-discuss] Procedure to configure rules to enable access to port in controller In-Reply-To: <788c244328789972f5fb1e92c27953c5de943bab.camel@intel.com> References: <788c244328789972f5fb1e92c27953c5de943bab.camel@intel.com> Message-ID: Erich, The "No route to host" sounds like you don't have the route set to 10.10.10.3 in your external host. Check with "ip route" to see if 10.10.10.0/24 route or default route were set. I suppose you are working on a bare metal system. Mingyuan -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, July 16, 2019 0:58 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Procedure to configure rules to enable access to port in controller Hi, I'm trying to enable a monitoring service in the controller. After some debugging I realize that the access rules are managed by calico. So, I'm trying to apply the following policy: apiVersion: "crd.projectcalico.org/v1" kind: GlobalNetworkPolicy metadata: name: custom-testing-port spec: selector: "has(iftype) && iftype == 'oam'" order: 100 applyOnForward: false types: - Ingress - Egress ingress: - action: Allow ipVersion: 4 protocol: TCP destination: ports: [9100] egress: - action: Allow ipVersion: 4 protocol: TCP And I run this with: $ kubectl apply -f policy.yaml Then I used this command to enable the access to the port. $ sudo iptables -A INPUT -p tcp -m multiport --dports 9100 -m comment --comment "Testing port expose" -j ACCEPT This seems to be the same procedure as described here[0] to enable access to horizon. If I do a `iptables-save | grep 9100` I can see the rules: -A cali-pi-custom-testing-port -p tcp -m comment --comment "cali:htmNJYyMe7qlodKr" -m multiport --dports 9100 -j MARK --set-xmark 0x10000/0x10000 -A cali-pi-custom-testing-port -p tcp -m comment --comment "cali:htmNJYyMe7qlodKr" -m multiport --dports 9100 -j MARK --set-xmark 0x10000/0x10000 -A INPUT -p tcp -m multiport --dports 9100 -m comment --comment "Testing port expose" -j ACCEPT -A cali-pi-custom-testing-port -p tcp -m comment --comment "cali:htmNJYyMe7qlodKr" -m multiport --dports 9100 -j MARK --set-xmark 0x10000/0x10000 However, when I try to run a curl from an external host I get a "A communication error occurred: 'No route to host'" error. If I run this same command from inside the controller I get the data. $ curl http://10.10.10.3:9100/metrics I'm sure I'm missing a step, does anybody knows what I can do next to enable the port access? - [0] https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Generate_the_stx-openstack_application_tarball _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cheng1.li at intel.com Tue Jul 16 01:41:18 2019 From: cheng1.li at intel.com (Li, Cheng1) Date: Tue, 16 Jul 2019 01:41:18 +0000 Subject: [Starlingx-discuss] stx.3.0 Milestone-1 status In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C156C129@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C156C129@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Brent, Matt, could you please to review the OVS-DPDK containerization spec? 
Thanks https://review.opendev.org/#/c/655830/ Thanks, Cheng -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, July 16, 2019 6:27 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx.3.0 Milestone-1 status Hello all, The stx.3.0 milestone-1 is planned for this week. The criteria for the milestone are as follows: - Release priorities and major features defined. - High level resourcing secured. Reference: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Milestones Based on the candidate feature list that has been reviewed with the TSC and project leads from the community, the release planning team feels that we are in a good position for the milestone. The candidate list is available at: https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 Additional features/specs can still be proposed/reviewed. The next milestone is spec freeze which is currently scheduled for the week of August 12. At that point, no new specs will be considered for stx.3.0. The milestone will be more formally reviewed in the next community meeting on July 17/2019. Regards, Ghada _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Tue Jul 16 03:10:22 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 16 Jul 2019 03:10:22 +0000 Subject: [Starlingx-discuss] About rebase to the new Nova branch f/stein.2 Message-ID: <93814834B4855241994F290E959305C7530AE77A@SHSMSX104.ccr.corp.intel.com> Hi Scott, Since below patch already merged, could you help to update related docker image in docker hub? https://review.opendev.org/#/c/669053/ So that we can use latest nova to verify VM live migration related issues. Thanks! zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Tue Jul 16 06:39:13 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 16 Jul 2019 06:39:13 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 In-Reply-To: References: <2f9fe484-970e-0375-f1f1-75b33b532cc2@linux.intel.com> <86a87855-3e07-4a6d-5eb0-847ecf719b57@gmail.com> <6d229ffa-b1a1-b110-0f7f-32e25bd7493a@intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FD3003@SHSMSX104.ccr.corp.intel.com> <5c77d8a6-de6f-0b3a-10bd-9edb0ece08eb@linux.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC15029D8@ALA-MBD.corp.ad.wrs.com> Message-ID: Created Story [1] for python-smartpm and [2] for rpm-python. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, July 12, 2019 10:22 AM To: Sun, Austin ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/11/19 6:16 PM, Sun, Austin wrote: > Hi Penny: > Thanks a lot your info. > Story [1] is using to track python2to3 for stx.3.0 . > Task 35794 was created for upgrade requests-toolbelt. > Task 35795 for replacing rpm_python and Task 35796 for > replacing python-smartpm replacing python-smartpm probably need a story on it's own, it will completely change the patch update process. Sau! > > [1] https://storyboard.openstack.org/#!/story/2006158 > > Thank > BR > Austin Sun. 
> > -----Original Message----- > From: Penney, Don [mailto:Don.Penney at windriver.com] > Sent: Friday, July 12, 2019 5:12 AM > To: Saul Wold ; > starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > I think I can use this module in place of the rpm one: > https://pypi.org/project/version_utils/ > > It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python. > > > -----Original Message----- > From: Penney, Don > Sent: Thursday, July 11, 2019 3:50 PM > To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. > > We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? > > I can also look at the current use of the rpm module in patching and look for alternatives. > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 11, 2019 3:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > > > On 7/10/19 7:03 AM, Sun, Austin wrote: >> Hi All: >> The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 >> Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. >> As sync in non-OpenStack distro meeting. >> We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . >> So below 6 packages are coming 3rd party which might be not python2to3 compliance. >> >> Package | who is using >> openvswitch | ovs >> python-cephfs | ceph >> python-smartpm | standalone package >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller > > Can you identify replacement python3 packages for any of these. > > I know we found out that smartpm is used for the patch process, I know > that smartpm is also an older project that does not have any upstream > support any further, so that will require a fair amount of work. > > Sau! > >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin [mailto:austin.sun at intel.com] >> Sent: Wednesday, July 10, 2019 4:03 PM >> To: Xie, Cindy ; Hu, Yong ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Hi All: >> New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. >> There are 11 packages not from centos but including python and may not be compatiable python2 and python3. >> Package | who is using >> openvswitch | ovs >> python-aniso8601 | keystone >> python-cephclient | ceph >> python-cephfs | ceph >> python-django-bash-completion | sysinv >> python-smartpm | standalone package >> python-unittest2 | sysinv >> python-XStatic-jquery-ui | stx-gui >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller >> >> >> I will continue check those 11 packages . >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1808073 >> >> Thanks. 
>> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin >> Sent: Thursday, July 4, 2019 11:43 AM >> To: Xie, Cindy ; Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] Python2 -> Python3 >> >> Hi Cindy: >> Yes. we will do it and update sheet. >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Xie, Cindy [mailto:cindy.xie at intel.com] >> Sent: Thursday, July 4, 2019 11:37 AM >> To: Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Austin, >> Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: >> >> In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. >> >> Thanks. - cindy >> >> -----Original Message----- >> From: Yong Hu [mailto:yong.hu at intel.com] >> Sent: Thursday, July 4, 2019 11:25 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? >> >> In my view the best solution is to wait for CentOS 8.0 :-) >> >> >> On 03/07/2019 2:55 PM, Dean Troyer wrote: >>> On 7/3/19 4:07 PM, Saul Wold wrote: >>>> The current proposal seems to be to completely convert the base >>>> CentOS7.6 system level python to use python3, this carries a high >>>> risk factor as changing out all system-level python code could have >>>> a cascade effect on system functionality and additional dependencies. >>>> While >>> >>> Changing the distro/system Python version out from under the rest of >>> the distro seems like an enormous time sink, much less a significant >>> reliability risk. >>> >>>> A better solution would be to build python3 and the associated >>>> requirements from the existing RHEL EPEL (Extra Packages for >>>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>>> This version correctly installs in a segregated directory tree. >>> >>> We would probably want to run a significant subset of the upstream >>> OpenStack testing on this combination as it is not (AFAIK) tested there. >>>  But this is true of any runtime + distro combination that is not >>> in the fairly short list of combinations that upstream OpenStack >>> actively tests. >>> >>>> Another option would be to delay the actual python2 conversion to >>>> StarlingX 4.0, the OpenStack Train release will still support python2. >>> >>> One downside to this is it leaves us no margin to defer the change >>> again, this is our second chance as it were.  OpenStack U (as of >>> now) is likely to drop py2 support as a guarantee across-the-board. >>> >>>> There is still work that is needed beyond the conversion of the >>>> python code itself to things like RPM specfiles data and other >>>> source code (such as, C code that has #includes of python2.7). It's >>>> not clear to me how much functional testing with python3 has >>>> occurred for the flock beyond what Dean has started with devstack. >>> >>> I managed to get the fault services running on py3, sysinv fell over >>> during the dbsync in my quick post-PTG trial run.  That is as far as >>> I took it.  
Anyone who wants to try can pick out the local.conf I posted [0]
>>>
>>> dt
>>>
>>> [0] http://paste.openstack.org/show/753844/
>>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From zhang.kunpeng at 99cloud.net Tue Jul 16 07:54:04 2019
From: zhang.kunpeng at 99cloud.net (张鲲鹏)
Date: Tue, 16 Jul 2019 15:54:04 +0800
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
Message-ID: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net>

Hi guys,

Recently I got a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, all of the VMs become unreachable. I lost connections with all VMs, and the VMs also lost each other.

Below is the StarlingX environment.

1. stx1.0 version, bootimage[1]
2. Simplex deployment
3. 5 network ports. Only one doesn't support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network, and the other two are passed through to a VM.
4. The VM was also attached to two virtual networks. I have tested the case of attaching one virtual net; there was no problem.

When I rebooted the VM, several things happened. The interfaces and bridges went down, all the virtual dhcp services went down, and ovs-vswitchd was restarted. But when I brought the interfaces and dhcp services up and rebooted the other VMs, I got the connections to them again.

It's OK to reboot a VM without a physical NIC. We think it may be caused by ovs-dpdk: when we stopped using ovs-dpdk and started ovs manually, the problem was gone.

I cannot understand the problem; could anybody give me some comments on it? Thanks a lot.

[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso

Kunpeng

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From chenjie.xu at intel.com Tue Jul 16 08:49:13 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Tue, 16 Jul 2019 08:49:13 +0000
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net>
References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net>
Message-ID:

Hi Kunpeng,

When you reboot the VM with two physical pci-passthrough NICs, ovs-vswitchd is restarted and the interfaces and bridges go down. The virtual networks used by the VMs are based on these interfaces and bridges, so the other VMs will lose their connections.

Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which are passed through to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology:

2 network ports without DPDK > VM
2 network ports with DPDK > Data Network
1 network port without DPDK > OAM

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 3:54 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi guys,

Recently I got a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, all of the VMs become unreachable. I lost connections with all VMs, and the VMs also lost each other.

Below is the StarlingX environment.

1. stx1.0 version, bootimage[1]
2. Simplex deployment
3. 5 network ports. Only one doesn't support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network, and the other two are passed through to a VM.
4. The VM was also attached to two virtual networks. I have tested the case of attaching one virtual net; there was no problem.

When I rebooted the VM, several things happened. The interfaces and bridges went down, all the virtual dhcp services went down, and ovs-vswitchd was restarted. But when I brought the interfaces and dhcp services up and rebooted the other VMs, I got the connections to them again.

It's OK to reboot a VM without a physical NIC. We think it may be caused by ovs-dpdk: when we stopped using ovs-dpdk and started ovs manually, the problem was gone.

I cannot understand the problem; could anybody give me some comments on it? Thanks a lot.

[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso

Kunpeng

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhang.kunpeng at 99cloud.net Tue Jul 16 09:39:37 2019
From: zhang.kunpeng at 99cloud.net (张鲲鹏)
Date: Tue, 16 Jul 2019 17:39:37 +0800
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To:
References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net>
Message-ID: <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net>

Hi Chenjie,

Well, I will try the network topology as you said. But a passthrough NIC with DPDK is our customer's requirement.

Do you have an easy way to disable DPDK for openvswitch in stx 1.0? I tried executing "system modify --vswitch_type none" before "system host-unlock controller-0", but it didn't work well.
Thanks Kunpeng > On Jul 16, 2019, at 16:49, Xu, Chenjie wrote: > > Hi Kunpeng, > When you reboot the VM with two physical pci-passthrough NICs, ovs-vswtichd is restarted and the interfaces and bridges are down. The virtual networks used by the VMs are based on these interfaces and bridges. So other VMs will lost connections. > > Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which is used to passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology: > 2 network port without DPDK > VM > 2 network port with DPDK > Data Network > 1 network port without DPDK > OAM > > Best Regards, > Xu, Chenjie >   <> > <>From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Tuesday, July 16, 2019 3:54 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart > > Hi guys, > > Recently I got a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, then all of VMs cannot be connected. I lost connections with all VMs and also the VMs lost each others. > > Below is the StarlingX environment. > > 1. stx1.0 version, bootimage[1] > 2. Simplex deployment > 3. 5 Network ports. Only one don’t support DPDK,and it is used to OAM Network. In the rest, two are used to data network, and another two are used to passthrough to a VM. > 4. The VM was attached two more virtual networks. I have tested the case of attaching one virtual net, it was no problem. > > When I reboot the VM, something were happened. The interfaces and bridges were down, all the virtual dhcp services were down and ovs-vswitchd was restarted. But when I up the interfaces and dhcp services and reboot the other VMs, I have got the connections with them again. > > It’s ok when to reboot the VM without physical NIC. We think it may be caused by ovs-dpdk, so we stop to use ovs-dpdk and start the ovs manually, the problem was gone. > > I cannot understand the problem, anybody could give me some comments for it? Thanks a lot. > > [1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso > > Kunpeng > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Jul 16 12:40:19 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 16 Jul 2019 12:40:19 +0000 Subject: [Starlingx-discuss] Community Call (July 17, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AAF2D4@ALA-MBD.corp.ad.wrs.com> Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include... - sanity - any red sanities since last Community meeting? - reviews in need of attention - defect trend / gating launchpads - bitergia update: see https://etherpad.openstack.org/p/stx-bitergia - first contact update - mailing list responsiveness - see https://etherpad.openstack.org/p/stx-first-contact (at the bottom) - open actions from previous meetings Please feel free to add topics on the etherpad [0]. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190717T1400 From cindy.xie at intel.com Tue Jul 16 12:44:47 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 16 Jul 2019 12:44:47 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/17 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE0FC0@SHSMSX104.ccr.corp.intel.com> All, Below are the agenda I proposed, please feel free to add more: Agenda for 7/17 meeting: - continue Python2to3 plan review (Austin) - kernel minor version upgrade to kernel-3.10.0-957.21.3 (Haitao/Shuai) - Ceph containerization plan review (Tingjie) - stx 2.0 bug triage/review (Cindy) - opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, July 17, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From Don.Penney at windriver.com Tue Jul 16 13:58:20 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 16 Jul 2019 13:58:20 +0000 Subject: [Starlingx-discuss] About rebase to the new Nova branch f/stein.2 In-Reply-To: <93814834B4855241994F290E959305C7530AE77A@SHSMSX104.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C7530AE77A@SHSMSX104.ccr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC152965E@ALA-MBD.corp.ad.wrs.com> The formal CENGN build will generate new images and publish to docker hub weekly, as part of the Monday night build. The new images built last night pulled the stx/stein.2 branch for the nova build. From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Monday, July 15, 2019 11:10 PM To: Little, Scott Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] About rebase to the new Nova branch f/stein.2 Hi Scott, Since below patch already merged, could you help to update related docker image in docker hub? https://review.opendev.org/#/c/669053/ So that we can use latest nova to verify VM live migration related issues. Thanks! zhipeng -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengs.liu at intel.com Tue Jul 16 14:03:52 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 16 Jul 2019 14:03:52 +0000 Subject: [Starlingx-discuss] About rebase to the new Nova branch f/stein.2 In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC152965E@ALA-MBD.corp.ad.wrs.com> References: <93814834B4855241994F290E959305C7530AE77A@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC152965E@ALA-MBD.corp.ad.wrs.com> Message-ID: <93814834B4855241994F290E959305C7530AE83A@SHSMSX104.ccr.corp.intel.com> Thanks Don! From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: 2019年7月16日 21:58 To: Liu, ZhipengS ; Little, Scott Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] About rebase to the new Nova branch f/stein.2 The formal CENGN build will generate new images and publish to docker hub weekly, as part of the Monday night build. The new images built last night pulled the stx/stein.2 branch for the nova build. From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Monday, July 15, 2019 11:10 PM To: Little, Scott Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] About rebase to the new Nova branch f/stein.2 Hi Scott, Since below patch already merged, could you help to update related docker image in docker hub? https://review.opendev.org/#/c/669053/ So that we can use latest nova to verify VM live migration related issues. Thanks! zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Tue Jul 16 18:15:29 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 16 Jul 2019 13:15:29 -0500 Subject: [Starlingx-discuss] [Nova][Placement] NUMA topology testing in Placement Message-ID: The OpenStack Placement team is looking for some input on the sort of NUMA topology that they should employ in the Placement tests. See [0], the second paragraph under "## Cleanup", reproduced below. In short, they are looking for input as to the NUMA topology that the test suite should create to run queries against. Any thoughts that we (StarlingX community) can offer would be helpful. I am thinking that at a minimum we have some topologies used in our test suites that we could share, things like the number of cores/sockets/devices and the affinities that go with them? Replies here will be forwarded to the right places or reply directly to cdent's email [0] on openstack-discuss from last week. Thanks dt [0] http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007716.html > As mentioned last week, one of the important cleanup tasks that is not > yet in progress is updating the > [gabbit](https://opendev.org/openstack/placement/src/branch/master/gate/gabbits/nested-perfload.yaml) > > that creates the nested topology that's used in nested performance > testing. The topology there is simple, unrealistic, and doesn't > sufficiently exercise the several features that may be used during a > query that desires a nested response. This needs to be someone who > is more closely related to real world use of nested than me. efried? > gibi? -- Dean Troyer dtroyer at gmail.com From jimmy at openstack.org Tue Jul 16 18:40:07 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 16 Jul 2019 13:40:07 -0500 Subject: [Starlingx-discuss] Community Voting is now Open! Message-ID: <5D2E1A07.9040808@openstack.org> Community voting for the Open Infrastructure Summit Shanghai sessions is open! You can VOTE HERE , but what does that mean? 
Now that the Call for Presentations has closed, all submissions are available for community vote and input. After community voting closes, the volunteer Programming Committee members will receive the results to review to help them determine the final selections for the Summit schedule. While community votes are meant to help inform the decision, Programming Committee members are expected to exercise judgment in their area of expertise and help ensure diversity of sessions and speakers. View full details of the session selection process.

In order to vote, you need an OSF community membership. If you do not have an account, please create one by going to openstack.org/join. If you need to reset your password, you can do that here.

Hurry, voting closes Monday, July 22 at 11:59pm Pacific Time (Tuesday, July 23 at 6:59 UTC). Continue to visit https://www.openstack.org/summit/shanghai-2019 for all Summit-related information.

REGISTER Register for the Summit before prices increase on August 7th!

VISA APPLICATION PROCESS Make sure to secure your Visa soon. More information about the Visa application process.

TRAVEL SUPPORT PROGRAM August 8th is the last day to submit applications. Please submit your applications by 11:59pm Pacific Time (August 9th at 6:59am UTC).

If you have any questions, please email summit at openstack.org.

Cheers,
Jimmy

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sgw at linux.intel.com Tue Jul 16 20:42:32 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Tue, 16 Jul 2019 13:42:32 -0700
Subject: [Starlingx-discuss] Build Layering and refactoring of repos
Message-ID: <0f8ba5a5-c2ef-520d-6a9f-3fc4a07b02ca@linux.intel.com>

Hi Scott,

I was reviewing the google spreadsheet [0] you shared during the Build Sub-team [1] meeting last week. I think this is a good direction and I am beginning to understand your logic and urgency around making the changes. I have some comments on some of the moves.

1) Did you factor in any of Dean's thoughts about reorgs? email [2] / ethercalc [3]

2) Can we remove the stx- prefix from the new repos to start with, instead of propagating it, given we are inside the starlingx/ namespace already?

3) Not sure if "compile" is the right name for the layer of packages (go, python, rpm, and bash); does bash really belong here? I don't think we depend on it for the build, do we? Is there a specific modification to bash that is build-specific?

4) openstack-helm* I believe is used by stx-platform-helm; at least we saw that dependency with the MultiOS/openSUSE specfiles.

5) Maybe a future move is getting integ/puppet into the toplevel puppet repo, and ultimately part of ansible-playbooks if the plan is to convert to ansible.

Thanks
Sau!
[0] https://docs.google.com/spreadsheets/d/1zURL1UlDST8lnvw3dMlNWN6pkLX6EVF6TDBwNR9TQik/edit#gid=1697053891 [1] https://etherpad.openstack.org/p/stx-build [2] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-May/004597.html [3] https://ethercalc.openstack.org/stx-repo-org From maria.g.perez.ibarra at intel.com Tue Jul 16 21:19:26 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 16 Jul 2019 21:19:26 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190716 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-16 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Tue Jul 16 22:49:53 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 16 Jul 2019 22:49:53 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/16/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID="20190712T013000Z" from (link) ---------------------------------------------------------------------- Overall Results: Total = 415 Pass = 250 Fail = 8 Blocked = 34 Not Run = 105 Obsolete = 18 Total executed = 292 Pass Rate = 96.89% Formula used : Pass Rate = pass * 100 / (pass + fail) ---------------------------------------------------------------------- Results per Domain: Regression - AIO-SX 25 PASS | 1 Obsolete Regression - Backup & Restore Regression - Distributed Cloud Regression - Gnoochi 15 PASS Regression - FM Regression - HA 2 4 PASS | 1 FAIL Regression - Heat 12 PASS | 1 Obsolete Regression - Horizon 4 PASS Regression - Install and Config 5 PASS Regression - Maintenance 7 PASS | 1 FAIL Regression - Networking 94 PASS | 3 FAIL | 19 BLOCKED | 14 Obsolete Regression - Nova 2 PASS | Regression - Security 34 PASS | 1 FAIL | 6 BLOCKED | 1 Obsolete Regression - Storage Regression - Inventory 29 PASS | 1 FAIL System Test 19 PASS | 1 FAIL | 9 BLOCKED | 1 Obsolete --------------------------------------------------------------------------- Bugs: Controller can't unlock after lock on AIO-SX https://bugs.launchpad.net/starlingx/+bug/1833472 user does not login within configured time(60s) login is aborted https://bugs.launchpad.net/starlingx/+bug/1833469 After pull data cable on the compute, no alarm has triggered https://bugs.launchpad.net/starlingx/+bug/1834512 System account doesn't block after invalid login attempts https://bugs.launchpad.net/starlingx/+bug/1814345 Containers: lock_host failed on a host with config_drive VM https://bugs.launchpad.net/starlingx/+bug/1821026 200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot https://bugs.launchpad.net/starlingx/+bug/1832047 virsh only listing one volume, even though there was an additional volume attached after instantiation https://bugs.launchpad.net/starlingx/+bug/1834194 3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host https://bugs.launchpad.net/starlingx/+bug/1834255 Device UUID is missing when boot up VM with block device https://bugs.launchpad.net/starlingx/+bug/1835282 stx-openstack apply takes longer time when lock and unlock on standby controller https://bugs.launchpad.net/starlingx/+bug/1834083 Port list was not showing for some computes during install https://bugs.launchpad.net/starlingx/+bug/1834245 ceph-mgr restful plugin error preventing platform-integ-apps from auto applying https://bugs.launchpad.net/starlingx/+bug/1835938 Instance created with a flat network spawns in error state https://bugs.launchpad.net/starlingx/+bug/1835965 When creating instance with pci-passthrough port getting error https://bugs.launchpad.net/starlingx/+bug/1836682 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 VM fail to live migrate after evacuation https://bugs.launchpad.net/starlingx/+bug/1836402 application apply fails after compute lock and unlock https://bugs.launchpad.net/starlingx/+bug/1836609 Total Bugs: 17 ----------------------------------------------------------------------------- For more detail of the tests: 
https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033

Regards!
Maria G

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Angie.Wang at windriver.com Tue Jul 16 23:24:01 2019
From: Angie.Wang at windriver.com (Wang, Jing (Angie))
Date: Tue, 16 Jul 2019 23:24:01 +0000
Subject: [Starlingx-discuss] question about LP1828056
In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D8562497DC@CDSMSX102.ccr.corp.intel.com>
References: <56829C2A36C2E542B0CCB9854828E4D8562497DC@CDSMSX102.ccr.corp.intel.com>
Message-ID:

Hi Martin,

In the helm chart values.yaml, if a value contains only numbers (at least 8 digits), helm will cast it to float64. We need to add single/double quotes to the overridden value (if it contains only numbers) to force it to be a string; then helm is able to parse it correctly. I.e.:

system helm-override-update --values panko.yaml

cat panko.yaml
conf:
  panko:
    database:
      event_time_to_live: "555555555555"

But I cannot override it with quotes added via system helm-override-update with the --set option.

The openstack-helm LP you created, https://bugs.launchpad.net/openstack-helm/+bug/1836744, is invalid. This is not related to openstack-helm. There have been a couple of helm issues regarding float numbers; this was fixed in commit https://github.com/helm/helm/pull/3599

So you need to use the --set-string option to override the value (if it contains only numbers and at least 8 digits), i.e.:

helm install panko --set-string conf.panko.database.event_time_to_live=24828899

Thanks,
-Angie

From: Chen, Haochuan Z [mailto:haochuan.z.chen at intel.com]
Sent: July-15-19 10:53 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] question about LP1828056

Hi folks

I work on LP1828056, https://bugs.launchpad.net/starlingx/+bug/1828056

Issue description: after the system is deployed, begin to deploy the openstack application:
$ system application-upload
$ system helm-override-update --set conf.panko.database.event_time_to_live=24828899 stx-openstack panko openstack
$ system application-apply stx-openstack

When deploying the chart for osh-openstack-panko, the deployment fails at the job panko-db-sync.

controller-0:~$ kubectl get jobs -n openstack | grep panko
panko-db-init 1/1 37s 107m
panko-db-sync 0/1 107m 107m
panko-events-cleaner-1563196200 0/1 90m 90m
panko-ks-endpoints 1/1 67s 107m
panko-ks-service 1/1 39s 107m
panko-ks-user 1/1 62s 107m

Check the pod log, and you will find "event_time_to_live = 2.4828899e+07" in panko.conf, but db-sync requires an integer.

/var/lib/kubelet/pods/73b84006-a6ff-11e9-ae87-525400d66765/volumes/kubernetes.io~secret/db-sync-conf/..2019_07_15_12_52_54.229165405/panko.conf
event_time_to_live = 2.4828899e+07

In /opt/platform/helm/19.01/stx-openstack/1.0-17/openstack-panko.yaml:
values:
  conf:
    panko:
      database:
        event_time_to_live: 24828899

So this may be tiller's issue. I find that if event_time_to_live is set to a value of more than 6 digits, a float value is generated in panko.conf.

Any idea about how to fix this issue? Thanks!

Martin, Chen
SSP, Software Engineer
021-61164330

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ada.cabrales at intel.com Tue Jul 16 23:41:17 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Tue, 16 Jul 2019 23:41:17 +0000
Subject: [Starlingx-discuss] [ Test ] Meeting notes - 07/16/2019
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CE58C7A@FMSMSX114.amr.corp.intel.com>

Agenda for 07/16
Attendees: Cristopher, JC, Jose, JP, Numan, Elio, Fernando, Richo, Ada, Yang, Yong

1. Sanity status - Cristopher
Having issues with the deletion of a snapshot, but seen only in the automated suite. Manually it works. Working on debug.
WR is seeing problems with the re-apply of stx-openstack.
Currently, WR and Intel are both running sanity. We could optimize the sanity run by splitting the execution. Both sides will share the titles of the tests and a description in order to work on this. JC will work on this from the Intel side; Numan, Yang, Peng on the WR side.
Don't forget to continue covering different environments.

2. Regression testing status stx.2.0 - Elio, Numan
Elio - Monday we started with the second pass - the general tracker has been updated, some test cases marked as obsolete. 5 failures, 49 TCs passed, 27 blocked. For this pass, focus is on failed and blocked TCs. Alarm TCs are obsolete. Some SRIOV tests still blocked (PT).
Automated regression - Duplex, external and standard configs have pending test cases. Elio, please send the domains covered by automated regression in order to build the report.
Help required on encryption, NUMA. An email asking questions has been sent. These TCs will stay 'not run' until we get answers.
The total test number will increase due to the addition of IPv6 and feature testing (for regression).
Numan - not as fast as we would like. 131 TCs pending (regression and new features for regression). Ada and Numan will sync offline to review how we can help each other. Ada to review regarding resources.
Automated: making updates in the networking domain due to the changes recently introduced. One regression ran this weekend; working on analyzing results.
Please send the info before the release meeting. Ada to add the 'feature-stx.2.0' tab into the summary.

3. Consolidation of manual / automated regression into one report - Numan and Ada
Ada to work with Numan on finding a time slot for working on this.

4. Pytest framework in the repo - status
A new patch set uploaded - https://review.opendev.org/#/c/665419/
Previous comments sent are fixed by this patch.
From the Intel side, we got approval for submitting the code we have to the repo. We will work in the following days to get it out.

5. Opens
Numan - It's time to bring the dashboard conversation back. The last proposal was to deploy it locally and review how it works. Ada to check on resources for working on this.
Yang - do we have a public storage place? The suite requires a place from which setting files can be downloaded.
Proposal: to create a stx-test google user in order to consolidate info there. Talk about this in the community meeting - conversation about it has been happening there.

From haochuan.z.chen at intel.com Wed Jul 17 03:00:37 2019
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Wed, 17 Jul 2019 03:00:37 +0000
Subject: [Starlingx-discuss] question about LP1828056
In-Reply-To:
References:
Message-ID: <56829C2A36C2E542B0CCB9854828E4D856249C55@CDSMSX102.ccr.corp.intel.com>

Amazing, Angie! I succeeded in installing panko with this command now:
From haochuan.z.chen at intel.com Wed Jul 17 03:00:37 2019
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Wed, 17 Jul 2019 03:00:37 +0000
Subject: [Starlingx-discuss] question about LP1828056
In-Reply-To:
References: <56829C2A36C2E542B0CCB9854828E4D8562497DC@CDSMSX102.ccr.corp.intel.com>
Message-ID: <56829C2A36C2E542B0CCB9854828E4D856249C55@CDSMSX102.ccr.corp.intel.com>

Amazing, Angie! I succeeded in installing panko with this command now:

helm install starlingx/panko --values aa.yaml -n osh-openstack-panko --namespace openstack

controller-0:/var/lib/kubelet$ kubectl get jobs -n openstack | grep panko
panko-db-init        1/1   50s   100s
panko-db-sync        1/1   67s   100s
panko-ks-endpoints   1/1   67s   100s
panko-ks-service     1/1   48s   100s
panko-ks-user        1/1   56s   100s

I created aa.yaml from the excerpted values in the armada manifest file and the generated override file for the panko chart, and added this field:

conf:
  panko:
    database:
      event_time_to_live: "24568998"

So for the LP1828056 fix: currently system helm-override-update has only --set. I will add a --set-string option.
https://bugs.launchpad.net/starlingx/+bug/1828056

Thanks

Martin, Chen
SSP, Software Engineer
021-61164330
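For the StarlingX CLI path, a hedged end-to-end sketch of the quoting workaround until --set-string lands (application, chart and namespace arguments are the ones used earlier in this thread; the --values flag spelling is assumed from Angie's note):

  cat > panko.yaml <<'EOF'
  conf:
    panko:
      database:
        event_time_to_live: "24828899"
  EOF
  system helm-override-update --values panko.yaml stx-openstack panko openstack
  system application-apply stx-openstack
  kubectl get jobs -n openstack | grep panko   # panko-db-sync should reach 1/1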
From erich.cordoba.malibran at intel.com Wed Jul 17 04:23:30 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Wed, 17 Jul 2019 04:23:30 +0000
Subject: [Starlingx-discuss] Procedure to configure rules to enable access to port in controller
In-Reply-To:
References: <788c244328789972f5fb1e92c27953c5de943bab.camel@intel.com>
Message-ID: <2A9DA700-6887-465E-B8F2-83A242166522@intel.com>

Thanks Mingyuan. It turns out that I was having problems with the proxy. I was using:

$ NO_PROXY=10.10.10.3 curl http://10.10.10.3:9100/metrics

instead of

$ no_proxy=10.10.10.3 curl http://10.10.10.3:9100/metrics

In the end, only the policy was required to enable access to the port; the additional iptables command wasn't needed.

-Erich

On 7/15/19, 8:20 PM, "Qi, Mingyuan" wrote:

    Erich,

    The "No route to host" sounds like you don't have the route set to 10.10.10.3 in your external host.
    Check with "ip route" to see if the 10.10.10.0/24 route or a default route was set.
    I suppose you are working on a bare metal system.

    Mingyuan

    -----Original Message-----
    From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com]
    Sent: Tuesday, July 16, 2019 0:58
    To: starlingx-discuss at lists.starlingx.io
    Subject: [Starlingx-discuss] Procedure to configure rules to enable access to port in controller

    Hi,

    I'm trying to enable a monitoring service in the controller. After some debugging I realized that the access rules are managed by calico. So, I'm trying to apply the following policy:

    apiVersion: "crd.projectcalico.org/v1"
    kind: GlobalNetworkPolicy
    metadata:
      name: custom-testing-port
    spec:
      selector: "has(iftype) && iftype == 'oam'"
      order: 100
      applyOnForward: false
      types:
      - Ingress
      - Egress
      ingress:
      - action: Allow
        ipVersion: 4
        protocol: TCP
        destination:
          ports: [9100]
      egress:
      - action: Allow
        ipVersion: 4
        protocol: TCP

    And I run this with:

    $ kubectl apply -f policy.yaml

    Then I used this command to enable access to the port:

    $ sudo iptables -A INPUT -p tcp -m multiport --dports 9100 -m comment --comment "Testing port expose" -j ACCEPT

    This seems to be the same procedure as described here [0] to enable access to horizon. If I do `iptables-save | grep 9100` I can see the rules:

    -A cali-pi-custom-testing-port -p tcp -m comment --comment "cali:htmNJYyMe7qlodKr" -m multiport --dports 9100 -j MARK --set-xmark 0x10000/0x10000
    -A INPUT -p tcp -m multiport --dports 9100 -m comment --comment "Testing port expose" -j ACCEPT

    However, when I try to run curl from an external host I get "A communication error occurred: 'No route to host'". If I run this same command from inside the controller I get the data:

    $ curl http://10.10.10.3:9100/metrics

    I'm sure I'm missing a step. Does anybody know what I can do next to enable access to the port?

    - [0] https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Generate_the_stx-openstack_application_tarball
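A small verification sketch for this kind of setup (address and port are the ones from this thread; the CRD resource name follows upstream calico conventions):

  # curl honors the lowercase no_proxy variable; some tools ignore NO_PROXY
  no_proxy=10.10.10.3 curl http://10.10.10.3:9100/metrics

  # confirm the policy object was created, then inspect the generated rules
  kubectl get globalnetworkpolicies.crd.projectcalico.org
  sudo iptables-save | grep 9100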
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From chenjie.xu at intel.com Wed Jul 17 06:59:35 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 17 Jul 2019 06:59:35 +0000
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To: <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net>
References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net>
Message-ID:

Hi Kunpeng,

Maybe you can use SR-IOV and pass through the VF, which has performance similar to the physical NIC, to the VM. Then you can use DPDK inside the VM with the VF.

Sorry, I don't have an easy way to disable DPDK in stx 1.0. The following command is used for stx 2.0, which is still in progress:
system modify --vswitch_type none

Best Regards,
Xu, Chenjie
From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 5:40 PM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,

Well, I will try the network topology as you said. But a passthrough NIC with DPDK is our customer's requirement. Do you have an easy way to disable DPDK in openvswitch in stx 1.0? I had tried to execute "system modify --vswitch_type none" before "system host-unlock controller-0", but it doesn't work well.

Thanks
Kunpeng

On Jul 16, 2019, at 16:49, Xu, Chenjie wrote:

Hi Kunpeng,

When you reboot the VM with two physical pci-passthrough NICs, ovs-vswitchd is restarted and the interfaces and bridges go down. The virtual networks used by the VMs are based on these interfaces and bridges, so the other VMs lose their connections.

Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which are used for passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology:

2 network ports without DPDK -> VM
2 network ports with DPDK -> Data Network
1 network port without DPDK -> OAM

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 3:54 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi guys,

Recently I hit a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, none of the VMs can be reached: I lose connections to all VMs, and the VMs also lose each other. Below is the StarlingX environment.

1. stx 1.0 version, bootimage [1]
2. Simplex deployment
3. 5 network ports. Only one doesn't support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network, and the other two are used for passthrough to a VM.
4. The VM was attached to two more virtual networks. I have tested the case of attaching one virtual net; there was no problem.

When I rebooted the VM, several things happened: the interfaces and bridges went down, all the virtual dhcp services went down, and ovs-vswitchd was restarted. But when I brought the interfaces and dhcp services up and rebooted the other VMs, I got connections to them again. It's OK to reboot a VM without a physical NIC.

We think it may be caused by ovs-dpdk, so we stopped using ovs-dpdk and started ovs manually, and the problem was gone. I cannot understand the problem; could anybody give me some comments on it? Thanks a lot.

[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso

Kunpeng

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Bin.Yang at windriver.com Wed Jul 17 09:11:16 2019
From: Bin.Yang at windriver.com (Yang, Bin)
Date: Wed, 17 Jul 2019 09:11:16 +0000
Subject: [Starlingx-discuss] controller-1 failed after unlock
Message-ID:

Dear StarlingX experts,

I am trying out installing StarlingX (with the latest milestone 3 image) with VirtualBox, following the wiki:
https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandard

I am able to install/provision controller-0 and compute-0, but failed to unlock controller-1. The issue seems to be that kubelet.service cannot start due to a missing config file at /var/lib/kubelet/config.yaml:

controller-1:~# ls /var/lib/kubelet/ -al
total 8
drwxr-xr-x.  2 root root 4096 Jun 21 02:27 .
drwxr-xr-x. 60 root root 4096 Jul 16 10:43 ..

Could anyone with insight into this issue shed some light on it? Thanks in advance.

Below is the daemon log:

2019-07-17T02:06:13.623 controller-1 systemd[1]: info kubelet.service holdoff time over, scheduling restart.
2019-07-17T02:06:13.623 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2019-07-17T02:06:13.641 controller-1 systemd[1]: info Stopped Kubernetes Kubelet Server.
2019-07-17T02:06:13.647 controller-1 systemd[1]: info Starting Kubernetes Kubelet Server...
2019-07-17T02:06:13.000 controller-1 root: info /usr/bin/kubelet-cgroup-setup.sh(481470): Nothing to do, already configured: /sys/fs/cgroup/cpuset/k8s-infra.
2019-07-17T02:06:13.664 controller-1 systemd[1]: info Started Kubernetes Kubelet Server.
2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info Flag --feature-gates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info F0717 02:06:13.831961  481475 server.go:189] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
2019-07-17T02:06:13.837 controller-1 systemd[1]: notice kubelet.service: main process exited, code=exited, status=255/n/a
2019-07-17T02:06:13.847 controller-1 systemd[1]: notice Unit kubelet.service entered failed state.
2019-07-17T02:06:13.847 controller-1 systemd[1]: warning kubelet.service failed.

Best Regards,
Bin Yang
From Bill.Zvonar at windriver.com Wed Jul 17 10:41:09 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 17 Jul 2019 10:41:09 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com>

Hi Cindy,

I thought about this some more; sorry it took me so long to respond further.

I agree with splitting out the definitions of release priority/importance (which is subjective) from the technical severity (which is, I'd say, much less subjective).

Do we agree that one of the key next steps is to define the severity levels for defects in different domains? Once we have those agreed and written down somewhere, they can be used as guidance for the people that open Launchpads, and for those that screen them. Someone will note that some bugs cross domains, so it's not as simple as looking at one set of severity definitions, but let's cross that bridge next.

Then, if we've got general alignment on the severity definitions per domain, we can sort out what to use as a QRC formula for a release, I think.

Btw, it'd be nice if Launchpad had a field for Severity, so we could track that more easily - does anybody know if we can just request this & get it added as a custom field?

Bill...

-----Original Message-----
From: Xie, Cindy
Sent: Wednesday, July 10, 2019 7:13 PM
To: Zvonar, Bill ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
Subject: RE: bug severity and priority

Bill,

I definitely agree that not all Mediums shall be pushed to stx.3.0; this needs to be assessed carefully. But if we combine severity and priority together, then this decision needs to take the resource factor into consideration as well.

Actually, I think it's confusing to call an individual LP "gating" - I understand that we want to get the product quality into good shape and want to get as many bugs fixed as possible before we ship it. I suggest using the defect count as part of the release criteria (QRC). An example could be:

Number of Critical P1 defects: zero
Number of High P2 defects: < x
Number of Medium P3 defects: < y

And the only thing we need to agree on is the "x" and "y". It makes it easier for the TSC or release team to make the decision. The QRC needs to be agreed earlier, instead of right before the release decision has to be made. This way, we can really direct our engineering resources to work on the most important items, and we all have an agreed common goal.

Thanks. - cindy

-----Original Message-----
From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
Sent: Thursday, July 11, 2019 1:39 AM
To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
Subject: RE: bug severity and priority

Hi Cindy,

Thanks for sending this; I think this gives us something to start the discussion.

However we decide to align on severity/priority (I'll comment on that more later, need to think about it more), I think we need to be careful before we move all Mediums to 3.0 - it may be too much of a Gordian knot solution. I think we need to assess the Mediums (as Yong suggested earlier) to say why they should or should not be in 2.0. I also think this may help us sort out what our gating criteria are.

Bill...
-----Original Message-----
From: Xie, Cindy
Sent: Wednesday, July 10, 2019 10:42 AM
To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill ; Khalil, Ghada
Subject: bug severity and priority

Bill/Ghada,

I am sending out my definition of bug severity and priority:

Bug Exposure or Severity Definition
1 - Critical: Product or key feature is not usable for its intended purpose.
2 - High: Product or key feature is not reliably usable for its intended purpose, or use is significantly impaired.
3 - Medium: Product or key feature is usable provided there is a workaround.
4 - Low: Tolerable impact to user experience with minimal service and support costs.

Bug Priority Definition
P1 - Stopper: Resolution of this defect takes precedence over other defects and most other development activities. This level is used to focus maximum development team resources to resolve a defect in the shortest possible timeframe.
P2 - High: Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. The urgency to fix a P2 priority defect is imminent. P2 priority defects are intended to be resolved by the next planned external release of the software.
P3 - Medium: Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. P3 priority defects must have a planned timeframe for a verified resolution.
P4 - Low: Resolution of the defect has the least urgency to resolve; P4 priority defects may or may not have plans to resolve.

Let's discuss this and agree how we'd like to use them. My suggestion for the current "Medium" bugs is that we can mark them as "stx.3.0", and then at the beginning of stx.3 their priority can be moved to "high" to reflect that we want them fixed in 3.0. But the bug severity should never change, because severities are standard.

Thx. - cindy

From fungi at yuggoth.org Wed Jul 17 11:50:00 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 17 Jul 2019 11:50:00 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com>
Message-ID: <20190717114959.jp66ie3yzvdswqsz@yuggoth.org>

On 2019-07-17 10:41:09 +0000 (+0000), Zvonar, Bill wrote:
[...]
> Btw, it'd be nice if Launchpad had a field for Severity, so we
> could track that more easily - does anybody know if we can just
> request this & get it added as a custom field?
[...]

The data model for Launchpad hasn't changed in many years and I gather its codebase has very few remaining maintainers at Canonical these days. In the past we've had some limited luck requesting new methods added to the LP API, so I suppose it can't hurt to ask. You could file such a feature request here: https://bugs.launchpad.net/launchpad/+filebug

An alternative (which they are just as likely to recommend, if they respond to your request) is to use bugtags for this purpose. Those can have whatever names you want and you can still query based on them.
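If the bugtag route is taken, tags are also queryable through the Launchpad web service. A hedged example (URL form per Launchpad's public API conventions; the tag shown is one already used in this project):

  curl "https://api.launchpad.net/1.0/starlingx?ws.op=searchTasks&tags=stx.storage"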
It's worth noting that the subjective and non-project-agnostic nature of severity/importance is what led StoryBoard to not include fields for them, opting instead for a combination of extensible worklists (where relative priority can be indicated by manually ranking different stories, and you can have multiple worklists which rank them differently according to a variety of arbitrarily-chosen metrics or individual opinions) and storytags (which can have whatever names you want and are also usable to populate automatic worklists and board lanes).
-- Jeremy Stanley

From bin.yang at intel.com Wed Jul 17 13:27:47 2019
From: bin.yang at intel.com (Yang, Bin)
Date: Wed, 17 Jul 2019 21:27:47 +0800
Subject: [Starlingx-discuss] controller-1 failed after unlock
In-Reply-To:
References:
Message-ID: <20190717132747.GA10004@desktop-xfce4>

Hi Bin,

This file is generated by kubeadm, and kubeadm is executed by puppet. Normally, you should find a log like the one below in /var/log/puppet/:

Executing: 'kubeadm init --config=/etc/kubernetes/kubeadm.yaml'
...
/Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
...

If kubeadm did not run properly, you should find some error log there.

thanks,
Bin
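Building on Bin's hint, a first-triage sketch for the missing kubelet config (paths are the ones mentioned in this thread; nothing here is StarlingX-specific beyond the puppet log location):

  # did puppet actually run kubeadm on this node, and what did it log?
  sudo grep -rn "kubeadm" /var/log/puppet/ | tail -n 20

  # confirm the kubelet config is still absent and check the service state
  ls -al /var/lib/kubelet/
  systemctl status kubelet --no-pager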
From cindy.xie at intel.com Wed Jul 17 13:50:41 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 17 Jul 2019 13:50:41 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 7/17
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE2297@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for 7/17 meeting:

- continue Python2to3 plan review (Austin)

  rpm_name (number of py_files using it): python3 status and action / who is using
  - openvswitch (2): 4-Risk; only includes minor changes, can ignore
  - python-cephfs (1): 5-Low risk; ceph, only includes minor changes, can ignore
  - python-smartpm (178): 4-Risk; standalone package, may not be used
  - qemu-kvm-ev (1): 2-Upgrade; used by mtce-compute and mtce-control; only includes minor changes, can ignore
  - requests-toolbelt (30): 2-Upgrade; used by cgcs-patch-controller
  - rpm-python (2): 4-Risk; used by createrepo (cgcs-patch-controller), python-smartpm, and yum (createrepo, cgcs-patch-controller, yum-plugin-fastestmirror)

  3 stories created:
  - https://storyboard.openstack.org/#!/story/2006227: for python-smartpm, propose to use yum; now assigned to Don (may affect the update capability).
  - https://storyboard.openstack.org/#!/story/2006228: for rpm_python, propose to use version_utils; now assigned to Don.
  - https://storyboard.openstack.org/#!/story/2006158: created to track the remaining items. Austin is looking into requests-toolbelt.

  The Python3 version is not supported in CentOS 7.6. Option#1: if we want to include the Python3 runtime cut-over in stx.3, then we have to upgrade to CentOS 8.0; Option#2: stay with CentOS 7.6, in which case we will NOT have the Python3 runtime cut-over in stx.3.0. => agreed on Option#2.

- kernel minor version upgrade to kernel-3.10.0-957.21.3 (Haitao/Shuai)
  LP#1836685 to address CVE-2019-11477, pending security team approval. 3 patches submitted, please review. Upversion from 957.12.2 to 957.21.3. Concern regarding the timing and technical risk associated with the kernel upgrade. Saul: 8 patches in the rpm, but maybe there are more changes.
  AR: Cindy to send email to the mailing list and consult Ken about the CVE.
  AR: Saul to follow up and review the changes in 957.21.3 vs 957.12.2, the release notes and the actual delta between the two versions.
  Option#1: upgrade the kernel to 21.3 in master only; Option#2: only cherry-pick the security patch to address CVE-2019-11477.
  AR: Shuai to provide a test report for both options next week. We will make the decision on the mailing list based on the data from Saul and Shuai.
- Ceph containerization plan review (Tingjie)
  Spec under review: https://review.opendev.org/#/c/656371/, design doc: https://docs.google.com/document/d/1lnAZSu4vAD4EB62Mk18mCgM7aAItl26
  Start the implementation based on the spec.
  AR: Tingjie to attend tomorrow's TSC meeting for spec review.

- stx 2.0 bug triage/review (Cindy)
  - stx.storage: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
    1830191: the fix got reverted and Ovidiu is working on a new fix;
    1833738: Bob working on a fix, should get this in for stx.2.0
    1827119: ready for review again
    1831635: Liang needs test cases from WR to reproduce the failure, and to be able to log in to the system for live debug.
  - stx.distro.others: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
    1814345: re-opened yesterday, Zhipeng already has a new patch uploaded and under review.

- opens (all)
  Bill: bug gating criteria? Defer the discussion to the community call.

-----Original Message-----
From: Xie, Cindy
Sent: Tuesday, July 16, 2019 8:45 PM
To: 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent' ; Wold, Saul
Subject: Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/17

All,

Below is the agenda I proposed, please feel free to add more:

Agenda for 7/17 meeting:
- continue Python2to3 plan review (Austin)
- kernel minor version upgrade to kernel-3.10.0-957.21.3 (Haitao/Shuai)
- Ceph containerization plan review (Tingjie)
- stx 2.0 bug triage/review (Cindy)
- opens (all)

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, July 17, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

- Cadence and time slot:
  o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
- Call Details:
  o Zoom link: https://zoom.us/j/342730236
  o Dialing in from phone:
    o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
    o Meeting ID: 342 730 236
    o International numbers available: https://zoom.us/u/ed95sU7aQ
- Meeting Agenda and Minutes:
  o https://etherpad.openstack.org/p/stx-distro-other

From Jerry.Sun at windriver.com Wed Jul 17 14:32:44 2019
From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry))
Date: Wed, 17 Jul 2019 14:32:44 +0000
Subject: [Starlingx-discuss] [docs] Controller Docker Registry
Message-ID:

Hi Docs Team,

I don't think I ever sent an email here about the Docker registry on the controller. This is for story 2002840.

A docker registry is now deployed on the management network. You can interact with the registry through the address "registry.local:9001", for example "docker login registry.local:9001". Please refer to the upstream docker registry documentation for the full list of commands.

The docker registry deployed on the controllers is now authenticated with the same credentials as the platform keystone. Before pushing and pulling images, please run "docker login registry.local:9001" with your platform keystone credentials. The authorization rules for the docker registry are that admin can perform any action, while regular users can only interact with their own repo. For example, only "admin" and "testuser" can push/pull to registry.local:9001/testuser/busybox:latest.
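A hedged usage sketch of the registry flow Jerry describes (registry address, user and image name are the ones from his note):

  # authenticate with platform keystone credentials first
  docker login registry.local:9001

  # a regular user may only push/pull under their own repo
  docker tag busybox:latest registry.local:9001/testuser/busybox:latest
  docker push registry.local:9001/testuser/busybox:latest
  docker pull registry.local:9001/testuser/busybox:latest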
For example, only "admin" and "testuser" can push/pull to registry.local:9001/testuser/busybox:latest. Note: an unfortunate side effect of tieing docker registry auth to keystone is that there needs to be a keystone user in all lowercase to be used with the registry. Docker registry does not allow repos to be in caps (registry.local:9001/TESTUSER/busybox:latest) so the user might need to create a separate keystone user in all lower case. The command "system certificate-install" now supports updating the certificate used by all docker registry communication. This is done through the docker_registry mode (system certificate-install -m/--mode docker_registry path_to_cert) Thanks, Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Jul 17 14:46:20 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 17 Jul 2019 14:46:20 +0000 Subject: [Starlingx-discuss] [docs] Controller Docker Registry In-Reply-To: References: Message-ID: <3808363B39586544A6839C76CF81445EA1B95F6C@ORSMSX104.amr.corp.intel.com> Thanks for this update Jerry. I’ll add this to our docs team agenda today. -- Mike From: Sun, Yicheng (Jerry) [mailto:Jerry.Sun at windriver.com] Sent: Wednesday, July 17, 2019 8:33 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [docs] Controller Docker Registry Hi Docs Team, I don’t think I ever sent an email here about the Docker registry on the controller. This is for story 2002840 A docker registry is now deployed on the management network. You can interact with the registry through the address "registry.local:9001". For example, "docker login registry.local:9001" Please refer to upstream docker registry documentation for full list of commands. The docker registry deployed on the controllers are now authenticated with the same credentials as the platform keystone. Before pushing and pulling images, please run "docker login registry.local:9001" with your platform keystone credentials. The authorization rules for the docker registry is that admin can perform any action while regular users can only interact with their own repo. For example, only "admin" and "testuser" can push/pull to registry.local:9001/testuser/busybox:latest. Note: an unfortunate side effect of tieing docker registry auth to keystone is that there needs to be a keystone user in all lowercase to be used with the registry. Docker registry does not allow repos to be in caps (registry.local:9001/TESTUSER/busybox:latest) so the user might need to create a separate keystone user in all lower case. The command "system certificate-install" now supports updating the certificate used by all docker registry communication. This is done through the docker_registry mode (system certificate-install -m/--mode docker_registry path_to_cert) Thanks, Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Jul 17 15:00:23 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 17 Jul 2019 11:00:23 -0400 Subject: [Starlingx-discuss] Add StarlingX docs to our manifest Message-ID: <4bf4d6fb-4707-ae6c-0ef6-62e63f9fe4fa@windriver.com> Goal:    Designers and testers need a copy of the documentation that is contemporary with the code being worked on or tested. Assumptions:    A git clone of starlingx/docs can be read as is, or can be easily transformed into a readable form. Solution: 1) Add starlingx/docs to the repo manifest.     
From scott.little at windriver.com Wed Jul 17 15:00:23 2019
From: scott.little at windriver.com (Scott Little)
Date: Wed, 17 Jul 2019 11:00:23 -0400
Subject: [Starlingx-discuss] Add StarlingX docs to our manifest
Message-ID: <4bf4d6fb-4707-ae6c-0ef6-62e63f9fe4fa@windriver.com>

Goal:
   Designers and testers need a copy of the documentation that is contemporary with the code being worked on or tested.

Assumptions:
   A git clone of starlingx/docs can be read as is, or can be easily transformed into a readable form.

Solution:

1) Add starlingx/docs to the repo manifest.
    - When a designer pulls new software, he/she also pulls new documentation at the same time.
    - If a designer elects to work with a snapshot of the code for an extended period without pulling software updates, he/she continues to have access to the documentation of the same vintage.
    - This facilitates concurrent update of software and docs.
    - Does the designer need to read raw rst, or can he transform it into something he can use a web browser on to facilitate navigation?

2a) CENGN build publishes a versioned link to docs that matches the build.
    - In raw rst form, it might look like https://review.opendev.org/gitweb?p=starlingx/docs.git;a=tree;hb=
    - Can we publish a similar link to docs.starlingx.io to get a formatted web page matching a specific sha?

2b) Can and should CENGN run the software that transforms the docs git into web form?
    - Assumes internal links can be version qualified, or must be relative.
    - Feels like CENGN would be displacing docs.starlingx.io
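For option 1, a hedged sketch of what the manifest change could look like (the repo tool reads an XML manifest; the remote and path names below are assumptions, not final values):

  # hypothetical entry in the default.xml manifest:
  #   <project name="starlingx/docs" path="stx-docs" remote="starlingx" revision="master"/>
  # after which a designer pulls docs together with the code:
  repo sync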
From Bill.Zvonar at windriver.com Wed Jul 17 15:42:19 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 17 Jul 2019 15:42:19 +0000
Subject: [Starlingx-discuss] Community Call (July 17, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AAFCFE@ALA-MBD.corp.ad.wrs.com>

From today's call...

- sanity - any red sanities since last Community meeting?
  - nothing this week
  - https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.sanity
- reviews in need of attention
  - nothing this week
- documentation update (Mike Tullis)
  - wiki cleanup, doc updates, 2.0 plan
  - mega spec - there are stories for all the 2.0 gating work - many are closed, some left to do - doc team committing to get those done
  - wiki updates - removing conflicts - see https://docs.google.com/spreadsheets/d/1UJjUttsWQRyauATrip0wKGIxSO7DvyetDKmPwvEIDaA/edit#gid=0
    - focus is now on rows 57-59 in this sheet
    - Not Planned == Not Applicable (nothing to move from Wiki to Docs)
    - once they've completed moving wiki content to docs, they plan to delete the wiki page and leave a pointer to the doc
  - how to track per-build doc changes (say, for installation)
    - Action: Scott to summarize his view on using the manifest file for this; Mike to discuss in today's doc meeting
- defect trend / gating launchpads
  - severity vs. importance
    - severity - Critical (not usable), Major (usable w/ reduced functionality), Minor (usable w/ minor issues)
      - not query-able in Launchpad, it doesn't have a field - it's a 'suggestion' from the Launchpad creator
    - gating-ness - based on judgement of the TL/PL for the Launchpad
      - each sub-team should have the authority to use their judgement to set the gating-ness of their Launchpads
  - logistics - how frequently do the sub-teams 'gate' their Launchpads - at least once a week
    - sub-team leaders need to scrub their Mediums and fix the ones that are truly gating
    - the part of how we tag issues still needs a bit of alignment, but let's focus on getting stuff fixed
- stx.3.0 Milestone-1
  - Milestone Criteria:
    - Release priorities and major features defined.
    - High level resourcing secured
  - Candidate list (based on TSC and Project Lead reviews) is available at:
    - https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020
  - Additional features/specs can still be proposed/reviewed/added to the release.
  - The next milestone is spec freeze, which is currently scheduled for the week of August 12. At that point, no new specs will be considered for stx.3.0.
  - Yong to add some additional items to the list
  - we agreed to wait until after these have been presented at tomorrow's TSC before declaring Milestone-1
- didn't get to these... next time
  - bitergia update: see https://etherpad.openstack.org/p/stx-bitergia
  - first contact update - mailing list responsiveness - see https://etherpad.openstack.org/p/stx-first-contact (at the bottom)
  - open actions from previous meetings
- updates pending:
  - ACTION: Cindy to send her perspective on "severity" to the mailing list, let the discussion ensue
  - ACTION: Yong to propose how we could formalize the process of assessing the impact of a bug from different perspectives
  - ACTION: doc team to do an audit of the wikis to find pages that have stale data and/or aren't properly pointing to the docs site
  - ACTION: release team make the recommendation re: Blueprints for Backlog in the next TSC meeting - pending
  - ACTION: Bill start checking if any 'new' people emails are going unresponded - see update here (at bottom): https://etherpad.openstack.org/p/stx-first-contact
  - ACTION: Dean find out what our options are for increasing the per-mail size limit
  - ACTION: Bill follow up on status of bitergia changes - see Thierry's updates here: https://etherpad.openstack.org/p/stx-bitergia
- for later:
  - ACTION: Frank update on the forecast for the Docker image list - see https://bugs.launchpad.net/starlingx/+bug/1834504, with build team now
  - ACTION: Frank to talk to CENGN about getting sufficient space (pending any other parameters from Scott)
  - ACTION: Scott & Dean to talk about the mechanics for big files - pending Frank's discussions with CENGN
  - ACTION: Numan & Ada to sort out how aggregate regression reporting will be done (manual & automated) - they have booked a meeting to discuss
  - ACTION: Numan/Yang arrange an automation framework info session for the Community (in a few weeks, after Yang's vacation)
  - ACTION: Bill check with Ian about the logistics/timing of a mid-cycle meeting

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, July 16, 2019 8:40 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Community Call (July 17, 2019)

Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include...

- sanity - any red sanities since last Community meeting?
- reviews in need of attention
- defect trend / gating launchpads
- bitergia update: see https://etherpad.openstack.org/p/stx-bitergia
- first contact update - mailing list responsiveness - see https://etherpad.openstack.org/p/stx-first-contact (at the bottom)
- open actions from previous meetings

Please feel free to add topics on the etherpad [0].

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190717T1400
I think this is a good direction and I > am beginning to understand your logic and urgency around making the > changes. I have some comments some of the moves. > > 1) Did you factor in any of Dean's thoughts about reorgs? >    email [2] / ethercalc [3] No, I had missed that one.  Was rather busy with containerized builds back then. They all seem like good goals, but orthagonal to partitioning the build into Distro/Flock.  I think they should be persued as independent projects.  The tool I am developing to assist in the proposed movements might aid those projects as well. > > 2) can we remove the stx- prefix from the new repos to start with > instead of propagating that given we are inside the starlingx/ > namespace alread? We should be able to do this concurrently if desired.  I like the idea of getting both high impact changes done at the same time.  My only concern is that this expands the scope to additional high churn repos. > > 3) not sure if "compile" is right name the layer of packages (go, > python, rpm, and bash), does bash really belong here, I don't think we > depend on it for the build, do we ? Is there a specific modification > to bash that build specific? Virtually everything depends on bash directly or indirectly through other build tools. > > 4) openstack-helm* I believe is used by stx-platform-helm, at least we > saw that dependency with the MultiOS/openSUSE specfiles. Agree > > 5) Maybe a future move is getting integ/puppet into the toplevel > puppet repo and ultimately part of ansible-playbooks if the plan is to > convert to ansible. I'd like Don's input on this. > > Thanks >    Sau! > > > [0] > https://docs.google.com/spreadsheets/d/1zURL1UlDST8lnvw3dMlNWN6pkLX6EVF6TDBwNR9TQik/edit#gid=1697053891 > [1] https://etherpad.openstack.org/p/stx-build > [2] > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-May/004597.html > [3] https://ethercalc.openstack.org/stx-repo-org From fungi at yuggoth.org Wed Jul 17 15:51:09 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 17 Jul 2019 15:51:09 +0000 Subject: [Starlingx-discuss] Community Call (July 17, 2019) In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AAFCFE@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AAFCFE@ALA-MBD.corp.ad.wrs.com> Message-ID: <20190717155108.353jsu2u2klhl3u7@yuggoth.org> On 2019-07-17 15:42:19 +0000 (+0000), Zvonar, Bill wrote: [...] > find out what our options for increasing per mail size limit [...] Since Dean is one of the ML owners, he ought to be able to go to http://lists.starlingx.io/cgi-bin/mailman/admin/starlingx-discuss and set the max_message_size field there to the KB length desired. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From vm.rod25 at gmail.com Wed Jul 17 16:08:43 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 17 Jul 2019 11:08:43 -0500 Subject: [Starlingx-discuss] proposal for Intel GPU K8s device plugin support in StarlingX In-Reply-To: <9BAB5B7CAF57C3459E4636391F1071CE0530F2DA@shsmsx102.ccr.corp.intel.com> References: <9BAB5B7CAF57C3459E4636391F1071CE0530F2DA@shsmsx102.ccr.corp.intel.com> Message-ID: On Mon, Jul 15, 2019 at 3:37 AM An, Ran1 wrote: > > Hi All > > here is the proposal for enabling intel-gpu-plugin on StarlingX , welcome suggestion and advise. 
From ildiko.vancsa at gmail.com Wed Jul 17 16:25:39 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 17 Jul 2019 18:25:39 +0200
Subject: [Starlingx-discuss] July 21 Deadline: Join OSF to Vote in the 2020 Foundation Board Elections
Message-ID: <98F2E074-134B-4798-8C19-27647C6776D4@gmail.com>

Hi StarlingX Community,

As contributors to StarlingX, a pilot project supported by the OpenStack Foundation (OSF), we want to remind everyone of a benefit of joining the Foundation as an individual member: participating in the annual Foundation Board of Directors election.
The OSF board represents all of the projects[1], not just OpenStack, and so it's important for every community to actively participate, as both voters and candidates, in this election process. If you are not already an OpenStack Foundation member, please consider joining __by July 21__ to have a voice in the 2020 election. It's free to join and only takes a few minutes to complete the form.

Join here: https://www.openstack.org/join/register/?membership-type=foundation

Time sensitive! In order to be eligible to vote in the January 2020 Individual Board of Directors election[2] you must join as an Individual Foundation Member no later than __this Sunday, July 21, 2019 at 11:59pm PT__.

The OpenStack Foundation Board of Directors provides strategic and financial oversight of Foundation resources and staff. The 24 person Board is composed of directors elected by the Individual Foundation Members (8), directors elected by the Gold Members (8) and directors appointed by the Platinum Members (8).

Individual Member Directors are there to represent the Individual Members of the Foundation. These Directors act as the link between the thousands of members of the Foundation and the Board, and are not representing the companies for which they work. Meet the current Board of Directors[3], and learn more here[4].

For further OpenStack Foundation updates and news, please sign up for the Foundation mailing list[5]. You can also select the type of content you receive from the OpenStack Foundation here[6].

Best Regards,
Ildikó

[1] We're aware that some of the language about the election on openstack.org is still "openstack project" specific, and we'll be updating that prior to the 2020 elections to more accurately reflect that the OSF Board represents all OSF projects. Please don't let this dissuade you from joining, if you're interested in voting!
[2] https://www.openstack.org/election/2020-individual-director-election/
[3] https://www.openstack.org/foundation/board-of-directors/
[4] https://www.openstack.org/foundation/
[5] http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation
[6] https://www.openstack.org/community/email-signup

From sgw at linux.intel.com Wed Jul 17 16:28:00 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Wed, 17 Jul 2019 09:28:00 -0700
Subject: [Starlingx-discuss] Build Layering and refactoring of repos
In-Reply-To: <0a151b9c-4f22-f48d-abd1-1c80c71a5136@windriver.com>
References: <0f8ba5a5-c2ef-520d-6a9f-3fc4a07b02ca@linux.intel.com> <0a151b9c-4f22-f48d-abd1-1c80c71a5136@windriver.com>
Message-ID: <424cc36a-d474-00f0-6bd5-41caa512d37b@linux.intel.com>

On 7/17/19 8:44 AM, Scott Little wrote:
> On 2019-07-16 4:42 p.m., Saul Wold wrote:
>>
>> Hi Scott,
>>
>> I was reviewing the google spreadsheet [0] you shared during the Build
>> Sub-team [1] meeting last week. I think this is a good direction and I
>> am beginning to understand your logic and urgency around making the
>> changes. I have some comments on some of the moves.
>>
>> 1) Did you factor in any of Dean's thoughts about reorgs?
>>    email [2] / ethercalc [3]
>
> No, I had missed that one. Was rather busy with containerized builds
> back then.
>
> They all seem like good goals, but orthogonal to partitioning the build
> into Distro/Flock. I think they should be pursued as independent
> projects. The tool I am developing to assist in the proposed movements
> might aid those projects as well.

Fair enough, let's get Dean's input here at least.
>> 2) can we remove the stx- prefix from the new repos to start with
>> instead of propagating that, given we are inside the starlingx/
>> namespace already?
>
> We should be able to do this concurrently if desired. I like the idea
> of getting both high impact changes done at the same time. My only
> concern is that this expands the scope to additional high churn repos.

>> 3) not sure if "compile" is the right name for the layer of packages (go,
>> python, rpm, and bash), does bash really belong here, I don't think we
>> depend on it for the build, do we? Is there a specific modification
>> to bash that is build specific?
>
> Virtually everything depends on bash directly or indirectly through
> other build tools.

But why can't we use the system provided bash rather than needing the patched version? What specific patches are needed? I don't think the extra logging is the requirement here.

>> 4) openstack-helm* I believe is used by stx-platform-helm, at least we
>> saw that dependency with the MultiOS/openSUSE specfiles.
>
> Agree

>> 5) Maybe a future move is getting integ/puppet into the toplevel
>> puppet repo and ultimately part of ansible-playbooks if the plan is to
>> convert to ansible.
>
> I'd like Don's input on this.

Sure.

Sau!

>> Thanks
>>    Sau!
>>
>> [0] https://docs.google.com/spreadsheets/d/1zURL1UlDST8lnvw3dMlNWN6pkLX6EVF6TDBwNR9TQik/edit#gid=1697053891
>> [1] https://etherpad.openstack.org/p/stx-build
>> [2] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-May/004597.html
>> [3] https://ethercalc.openstack.org/stx-repo-org

From scott.little at windriver.com Wed Jul 17 16:38:02 2019
From: scott.little at windriver.com (Scott Little)
Date: Wed, 17 Jul 2019 12:38:02 -0400
Subject: [Starlingx-discuss] Build Layering and refactoring of repos
In-Reply-To: <424cc36a-d474-00f0-6bd5-41caa512d37b@linux.intel.com>
References: <0f8ba5a5-c2ef-520d-6a9f-3fc4a07b02ca@linux.intel.com> <0a151b9c-4f22-f48d-abd1-1c80c71a5136@windriver.com> <424cc36a-d474-00f0-6bd5-41caa512d37b@linux.intel.com>
Message-ID:

On 2019-07-17 12:28 p.m., Saul Wold wrote:
>>> 3) not sure if "compile" is the right name for the layer of packages (go,
>>> python, rpm, and bash), does bash really belong here ...
>>
>> Virtually everything depends on bash directly or indirectly through
>> other build tools.
>>
> But why can't we use the system provided bash rather than needing the
> patched version? What specific patches are needed? I don't think the
> extra logging is the requirement here.

I agree our current patch should not alter how anything builds in any direct way. However, it does cause royal havoc on the build order. Its presence as a buildable object creates many dependency loops. Not only do a lot of packages depend on bash, bash also depends on a lot of packages. Build-pkgs can't determine a build order and must guess. It wastes a lot of time trying to build packages in the wrong order, and failing, before it stumbles onto a good order.

Scott
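A quick way to see the loop Scott describes on a CentOS build host (standard rpm queries; output will vary by host):

  rpm -q --whatrequires bash   # packages that require bash
  rpm -qR bash                 # what bash itself requires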
On 17/07/2019 9:08 AM, Victor Rodriguez wrote: > On Mon, Jul 15, 2019 at 3:37 AM An, Ran1 wrote: >> >> Hi All >> >> here is the proposal for enabling intel-gpu-plugin on StarlingX , welcome suggestion and advise. >> >> >> >> The background: >> >> As a part of resource management, kubernetes provides a device plugin framework [2] for vendors to advertise their resources to the kubelet since version 1.8. StarlingX has already supported SR-IOV CNI plugins now [3]. >> >> Intel-gpu-plugins is a device plugin implementation [4] for intel GPU (with driver i915). Users could deploy their pods with Intel GPU resource requests or limits, if intel-gpu-plugins was integrated into StarlingX. >> > > +1 > >> >> >> proposal: >> >> Deploy intel-gpu-plugins as a daemon set with node selector “intelgpu: enabled”. Kubernetes label “intelgpu: enabled” will be set automatically once the node detected supported GPU device. >> >> Details are shown as follows: >> >> 1. Build StarlingX plugin docker image based on [5], the implement in starlingx are [6] and [7] >> > > ok > >> 2. Deploy Intel-gpu-plugins daemon set in tasks “bringup_kubemaster” after kubernetes master has been initialized during ansible bootstrap process. Add value “import_plugins” and value list “kube_plugins” as condition of deploying Intel-gpu-plugins daemon set, so user could determine whether Intel-gpu-plugins would be enabled. Create file “/etc/platform/enabled_kube_plugins” and write list “kube_plugins” into the file after active Intel-gpu-plugin daemon if “import_plugins” is true. Partical Implement is [8] >> >> 3. Detect supported GPU device with the help of sysinv agent and request to set kubernetes label “intelgpu: enabled” for specific node by calling sysinv conductor rpcapi. Sysinv conductor will check file “/etc/platform/enabled_kube_plugins”, and set kubernetes label if the file is exist and intel-gpu-plugins is in list. Partial implement is [9] >> > > I like the approach and the demo, I just have a question: > > One question, what kind of workload support the container running in > the GPU, do we have to write it in cuda? Do we have some example of > source code that will be run inside the container that will run on the > GPU ? > No, this is Intel GPU and we don't support Cuda APIs. We can use OpenCL. 
You can find Intel GPU device plugin demos here: https://github.com/intel/intel-device-plugins-for-kubernetes/tree/8310d84f96f9fc7b7487b2dfb9059905638c58fe/demo > regards > >> >> [1] https://storyboard.openstack.org/#!/story/2005937 >> >> [2] https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ >> >> https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md >> >> [3] https://review.opendev.org/#/c/655495/ >> >> [4] https://github.com/intel/intel-device-plugins-for-kubernetes >> >> [5] https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/cmd/gpu_plugin/README.md >> >> [6] https://review.opendev.org/668803 >> >> [7] https://review.opendev.org/668808 >> >> [8] https://review.opendev.org/666510 >> >> [9] https://review.opendev.org/666511 >> >> >> >> Thanks >> >> Ran >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From michael.l.tullis at intel.com Wed Jul 17 21:44:24 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 17 Jul 2019 21:44:24 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 7/17/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1B976E0@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maria.g.perez.ibarra at intel.com Wed Jul 17 23:24:10 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 17 Jul 2019 23:24:10 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190717 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-17 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Thu Jul 18 00:57:45 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 18 Jul 2019 00:57:45 +0000 Subject: [Starlingx-discuss] Story for adjust timeout value in chart Message-ID: <9700A18779F35F49AF027300A49E7C76608B567A@SHSMSX105.ccr.corp.intel.com> Hi Bob, I have created a story [0] to track the task that adjust timeout value of armada's wait in openstack-helm chart. I will do it after stx 2.0 release. Thanks. [0]: https://storyboard.openstack.org/#!/story/2006244 Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bin.Yang at windriver.com Thu Jul 18 01:13:12 2019 From: Bin.Yang at windriver.com (Yang, Bin) Date: Thu, 18 Jul 2019 01:13:12 +0000 Subject: [Starlingx-discuss] controller-1 failed after unlock In-Reply-To: <20190717132747.GA10004@desktop-xfce4> References: <20190717132747.GA10004@desktop-xfce4> Message-ID: Hi Bin, Thanks a lot for the help. Indeed there is error there, it seems a timeout issue to pull docker images (again, I have met such issue during ansible-playbook on controller-0, the workaround is to increase the timeout in ansible playbook), any suggestion to workaround this issue? 
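For reference, before touching any timeout I plan to first check whether the proxy itself is reachable from the node, because the log below says "no route to host" to the proxy rather than a slow pull. A rough sketch (generic commands, nothing StarlingX specific; the proxy address 128.224.230.5:9090 is taken from the log below and must match the real environment):

ip route get 128.224.230.5    # should resolve via the OAM interface
ping -c 3 128.224.230.5       # basic reachability of the proxy host
curl -sSI -x http://128.224.230.5:9090 https://k8s.gcr.io/v2/   # exercise one pull path through the proxy

If the proxy is unreachable, increasing the ansible or puppet timeouts would not help.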
Thanks 2019-07-16T11:07:02.080 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.085 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.091 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.096 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.101 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.105 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.109 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.113 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.117 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.121 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.125 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.130 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.134 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`ESC[0m 2019-07-16T11:07:02.138 ESC[1;31mError: 
2019-07-16 11:07:02 +0000 kubeadm init --config=/etc/kubernetes/kubeadm.yaml returned 1 instead of one of [0] BTW, It is very nice to talk to you, we are both 'Yang Bin' :) Best Regards, Bin Yang,    Solution Engineering Team,    Wind River ONAP Multi-VIM/Cloud PTL Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189 Skype: yangbincs993 -----Original Message----- From: Yang, Bin [mailto:bin.yang at intel.com] Sent: Wednesday, July 17, 2019 9:28 PM To: Yang, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] controller-1 failed after unlock Hi Bin, This file is generated by kubeadm. And kubeadm is executed by puppet. Normally, you should find below log in /var/log/puppet/: Executing: 'kubeadm init --config=/etc/kubernetes/kubeadm.yaml' ... /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" ... If kubeadm does not run properly, you should find some error log. thanks, Bin On Wed, Jul 17, 2019 at 09:11:16AM +0000, Yang, Bin wrote: > Dear StarlingX experts, > > > > I am exercising installing starlingx (with latest milestone3 image) with > virtualbox following the wiki: > https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandard > > > > I am able to install/provision controller-0, compute-0, but failed to > unlock controller-1, the issue seems to be kubelet.service cannot start > due to missing of config file at /var/lib/kubelet/config.yaml : > > > > controller-1:~# ls /var/lib/kubelet/ -al > > total 8 > > drwxr-xr-x. 2 root root 4096 Jun 21 02:27 . > > drwxr-xr-x. 60 root root 4096 Jul 16 10:43 .. > > > > > > Could anyone who have insight of this issue shed some lights on this issue > ? Thanks in advance > > > > Below is the daemon log: > > > > 2019-07-17T02:06:13.623 controller-1 systemd[1]: info kubelet.service > holdoff time over, scheduling restart. > > 2019-07-17T02:06:13.623 controller-1 systemd[1]: warning Cannot add > dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. > > 2019-07-17T02:06:13.641 controller-1 systemd[1]: info Stopped Kubernetes > Kubelet Server. > > 2019-07-17T02:06:13.647 controller-1 systemd[1]: info Starting Kubernetes > Kubelet Server... > > 2019-07-17T02:06:13.000 controller-1 root: info > /usr/bin/kubelet-cgroup-setup.sh(481470): Nothing to do, already > configured: /sys/fs/cgroup/cpuset/k8s-infra. > > 2019-07-17T02:06:13.664 controller-1 systemd[1]: info Started Kubernetes > Kubelet Server. > > 2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info Flag > --feature-gates has been deprecated, This parameter should be set via the > config file specified by the Kubelet's --config flag. See > https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ > for more information. > > 2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info Flag > --cpu-manager-policy has been deprecated, This parameter should be set via > the config file specified by the Kubelet's --config flag. See > https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ > for more information. 
> > 2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info F0717 > 02:06:13.831961 481475 server.go:189] failed to load Kubelet config file > /var/lib/kubelet/config.yaml, error failed to read kubelet config file > "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: > no such file or directory > > 2019-07-17T02:06:13.837 controller-1 systemd[1]: notice kubelet.service: > main process exited, code=exited, status=255/n/a > > 2019-07-17T02:06:13.847 controller-1 systemd[1]: notice Unit > kubelet.service entered failed state. > > 2019-07-17T02:06:13.847 controller-1 systemd[1]: warning kubelet.service > failed. > > > > > > > > > > Best Regards, > > Bin Yang > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bin.Yang at windriver.com Thu Jul 18 01:15:11 2019 From: Bin.Yang at windriver.com (Yang, Bin) Date: Thu, 18 Jul 2019 01:15:11 +0000 Subject: [Starlingx-discuss] controller-1 failed after unlock References: <20190717132747.GA10004@desktop-xfce4> Message-ID: Hi Bin, Forget my previous question, I think I understand the root cause: the oam network was not provisioned appropriately. I will figure out how to fix that. Thanks Best Regards, Bin Yang,    Solution Engineering Team,    Wind River ONAP Multi-VIM/Cloud PTL Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189 Skype: yangbincs993 -----Original Message----- From: Yang, Bin Sent: Thursday, July 18, 2019 9:13 AM To: 'Yang, Bin' Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] controller-1 failed after unlock Hi Bin, Thanks a lot for the help. Indeed there is error there, it seems a timeout issue to pull docker images (again, I have met such issue during ansible-playbook on controller-0, the workaround is to increase the timeout in ansible playbook), any suggestion to workaround this issue? 
Thanks 2019-07-16T11:07:02.080 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.085 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.091 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.096 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.101 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.105 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.109 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.113 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.117 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.121 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.125 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 128.224.230.5:9090: connect: no route to hostESC[0m 2019-07-16T11:07:02.130 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: , error: exit status 1ESC[0m 2019-07-16T11:07:02.134 ESC[mNotice: 2019-07-16 11:07:02 +0000 /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`ESC[0m 2019-07-16T11:07:02.138 ESC[1;31mError: 
2019-07-16 11:07:02 +0000 kubeadm init --config=/etc/kubernetes/kubeadm.yaml returned 1 instead of one of [0] BTW, It is very nice to talk to you, we are both 'Yang Bin' :) Best Regards, Bin Yang,    Solution Engineering Team,    Wind River ONAP Multi-VIM/Cloud PTL Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189 Skype: yangbincs993 -----Original Message----- From: Yang, Bin [mailto:bin.yang at intel.com] Sent: Wednesday, July 17, 2019 9:28 PM To: Yang, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] controller-1 failed after unlock Hi Bin, This file is generated by kubeadm. And kubeadm is executed by puppet. Normally, you should find below log in /var/log/puppet/: Executing: 'kubeadm init --config=/etc/kubernetes/kubeadm.yaml' ... /Stage[main]/Platform::Kubernetes::Master::Init/Exec[configure master node]/returns: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" ... If kubeadm does not run properly, you should find some error log. thanks, Bin On Wed, Jul 17, 2019 at 09:11:16AM +0000, Yang, Bin wrote: > Dear StarlingX experts, > > > > I am exercising installing starlingx (with latest milestone3 image) with > virtualbox following the wiki: > https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandard > > > > I am able to install/provision controller-0, compute-0, but failed to > unlock controller-1, the issue seems to be kubelet.service cannot start > due to missing of config file at /var/lib/kubelet/config.yaml : > > > > controller-1:~# ls /var/lib/kubelet/ -al > > total 8 > > drwxr-xr-x. 2 root root 4096 Jun 21 02:27 . > > drwxr-xr-x. 60 root root 4096 Jul 16 10:43 .. > > > > > > Could anyone who have insight of this issue shed some lights on this issue > ? Thanks in advance > > > > Below is the daemon log: > > > > 2019-07-17T02:06:13.623 controller-1 systemd[1]: info kubelet.service > holdoff time over, scheduling restart. > > 2019-07-17T02:06:13.623 controller-1 systemd[1]: warning Cannot add > dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. > > 2019-07-17T02:06:13.641 controller-1 systemd[1]: info Stopped Kubernetes > Kubelet Server. > > 2019-07-17T02:06:13.647 controller-1 systemd[1]: info Starting Kubernetes > Kubelet Server... > > 2019-07-17T02:06:13.000 controller-1 root: info > /usr/bin/kubelet-cgroup-setup.sh(481470): Nothing to do, already > configured: /sys/fs/cgroup/cpuset/k8s-infra. > > 2019-07-17T02:06:13.664 controller-1 systemd[1]: info Started Kubernetes > Kubelet Server. > > 2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info Flag > --feature-gates has been deprecated, This parameter should be set via the > config file specified by the Kubelet's --config flag. See > https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ > for more information. > > 2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info Flag > --cpu-manager-policy has been deprecated, This parameter should be set via > the config file specified by the Kubelet's --config flag. See > https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ > for more information. 
> > 2019-07-17T02:06:13.832 controller-1 kubelet[481475]: info F0717 > 02:06:13.831961 481475 server.go:189] failed to load Kubelet config file > /var/lib/kubelet/config.yaml, error failed to read kubelet config file > "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: > no such file or directory > > 2019-07-17T02:06:13.837 controller-1 systemd[1]: notice kubelet.service: > main process exited, code=exited, status=255/n/a > > 2019-07-17T02:06:13.847 controller-1 systemd[1]: notice Unit > kubelet.service entered failed state. > > 2019-07-17T02:06:13.847 controller-1 systemd[1]: warning kubelet.service > failed. > > > > > > > > > > Best Regards, > > Bin Yang > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhang.kunpeng at 99cloud.net Thu Jul 18 10:09:59 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Thu, 18 Jul 2019 18:09:59 +0800 Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart In-Reply-To: References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> Message-ID: <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net> Hi Xu,Chenjie I have tried to create VM with 2 pci_passthrough network ports without DPDK, there was the same problem when I rebooted it. Also, it was same when I reboot the VM with 2 SR-IOV VFs. Do you have any ideas to debug this problem? Thanks Kunpeng > On Jul 17, 2019, at 14:59, Xu, Chenjie wrote: > > Hi Kunpeng, > Maybe you can use SR-IOV and passthrough the VF which has similar performance to physical NIC to the VM. And then you can use DPDK inside the VM with the VF. <> > > Sorry, I don’t have easy way to disable DPDK in stx1.0. The following command is used for stx2.0 which is still in progress: > system modify --vswitch_type none > > Best Regards, > Xu, Chenjie > <>From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Tuesday, July 16, 2019 5:40 PM > To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart > > Hi Chenjie, > > Well, I will try the network topology as you said. But passthrough NIC with DPDK is our customer’s requirement. > And do you have some easy ways to disable dpdk of openvswitch in stx1.0? > I had tried to execute “system modify --vswitch_type none” before “system host-unlock controller-0", but it doesn’t work well. > > Thanks > Kunpeng > > On Jul 16, 2019, at 16:49, Xu, Chenjie > wrote: > > Hi Kunpeng, > When you reboot the VM with two physical pci-passthrough NICs, ovs-vswtichd is restarted and the interfaces and bridges are down. The virtual networks used by the VMs are based on these interfaces and bridges. So other VMs will lost connections. > > Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which is used to passthrough to the VM bound to DPDK? 
If so, could you please try OVS-DPDK with the following network topology: > 2 network port without DPDK > VM > 2 network port with DPDK > Data Network > 1 network port without DPDK > OAM > > Best Regards, > Xu, Chenjie > > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net ] > Sent: Tuesday, July 16, 2019 3:54 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart > > Hi guys, > > Recently I got a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, then all of the VMs cannot be connected. I lost connections with all VMs and the VMs also lost each other. > > Below is the StarlingX environment. > > 1. stx1.0 version, bootimage[1] > 2. Simplex deployment > 3. 5 Network ports. Only one doesn’t support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network, and another two are used to pass through to a VM. > 4. The VM was attached to two more virtual networks. I have tested the case of attaching one virtual net, and there was no problem. > > When I rebooted the VM, several things happened. The interfaces and bridges were down, all the virtual dhcp services were down and ovs-vswitchd was restarted. But when I brought up the interfaces and dhcp services and rebooted the other VMs, I got the connections with them again. > > It’s ok to reboot a VM without a physical NIC. We think it may be caused by ovs-dpdk, so we stopped using ovs-dpdk and started the ovs manually, and the problem was gone. > > I cannot understand the problem, could anybody give me some comments on it? Thanks a lot. > > [1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso > > Kunpeng > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel at schaible-consulting.de Thu Jul 18 10:49:48 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Thu, 18 Jul 2019 12:49:48 +0200 (CEST) Subject: [Starlingx-discuss] How to update StarlingX? Message-ID: <1004394953.185994.1563446988991@communicator.strato.com> Hi, at the moment we are running our system with a StarlingX version from around mid June. What is the recommended way to update our installation, e.g. by adding the StarlingX repository to our installation? In the past we were reinstalling everything, which is a real pain. Thanks Marcel From zhang.kunpeng at 99cloud.net Thu Jul 18 11:09:05 2019 From: zhang.kunpeng at 99cloud.net (张鲲鹏) Date: Thu, 18 Jul 2019 19:09:05 +0800 Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart In-Reply-To: <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net> References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net> Message-ID: I also find that when the ovs-vswitchd is restarted, I will lose the connections to VMs.
Before restart ovs: controller-0:/home/wrsroot# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.152/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 valid_lft forever preferred_lft forever inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link valid_lft forever preferred_lft forever 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8060/64 scope link valid_lft forever preferred_lft forever 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8061/64 scope link valid_lft forever preferred_lft forever 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff 16: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:120/64 scope link valid_lft forever preferred_lft forever 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::7c3a:87ff:fea3:9803/64 scope link valid_lft forever preferred_lft forever After: controller-0:/home/wrsroot# systemctl restart ovs-vswitchd controller-0:/home/wrsroot# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.152/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 
3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 valid_lft forever preferred_lft forever inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link valid_lft forever preferred_lft forever 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8060/64 scope link valid_lft forever preferred_lft forever 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8061/64 scope link valid_lft forever preferred_lft forever 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff 16: br-phy0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:120/64 scope link valid_lft forever preferred_lft forever 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::7c3a:87ff:fea3:9803/64 scope link valid_lft forever preferred_lft forever 20: tapfb74713e-cc: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff 21: tap1a965902-0b: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff One of the VMs: > On Jul 18, 2019, at 18:09, 张鲲鹏 wrote: > > Hi Xu,Chenjie > > I have tried to create VM with 2 pci_passthrough network ports without DPDK, there was the same problem when I rebooted it. > Also, it was same when I reboot the VM with 2 SR-IOV VFs. > Do you have any ideas to debug this problem? > > Thanks > Kunpeng > >> On Jul 17, 2019, at 14:59, Xu, Chenjie > wrote: >> >> Hi Kunpeng, >> Maybe you can use SR-IOV and passthrough the VF which has similar performance to physical NIC to the VM. And then you can use DPDK inside the VM with the VF. <> >> >> Sorry, I don’t have easy way to disable DPDK in stx1.0. The following command is used for stx2.0 which is still in progress: >> system modify --vswitch_type none >> >> Best Regards, >> Xu, Chenjie >> <>From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net ] >> Sent: Tuesday, July 16, 2019 5:40 PM >> To: Xu, Chenjie > >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart >> >> Hi Chenjie, >> >> Well, I will try the network topology as you said. But passthrough NIC with DPDK is our customer’s requirement. >> And do you have some easy ways to disable dpdk of openvswitch in stx1.0? >> I had tried to execute “system modify --vswitch_type none” before “system host-unlock controller-0", but it doesn’t work well. 
>> >> Thanks >> Kunpeng >> >> On Jul 16, 2019, at 16:49, Xu, Chenjie > wrote: >> >> Hi Kunpeng, >> When you reboot the VM with two physical pci-passthrough NICs, ovs-vswtichd is restarted and the interfaces and bridges are down. The virtual networks used by the VMs are based on these interfaces and bridges. So other VMs will lost connections. >> >> Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which is used to passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology: >> 2 network port without DPDK > VM >> 2 network port with DPDK > Data Network >> 1 network port without DPDK > OAM >> >> Best Regards, >> Xu, Chenjie >> >> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net ] >> Sent: Tuesday, July 16, 2019 3:54 PM >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart >> >> Hi guys, >> >> Recently I got a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, then all of VMs cannot be connected. I lost connections with all VMs and also the VMs lost each others. >> >> Below is the StarlingX environment. >> >> 1. stx1.0 version, bootimage[1] >> 2. Simplex deployment >> 3. 5 Network ports. Only one don’t support DPDK,and it is used to OAM Network. In the rest, two are used to data network, and another two are used to passthrough to a VM. >> 4. The VM was attached two more virtual networks. I have tested the case of attaching one virtual net, it was no problem. >> >> When I reboot the VM, something were happened. The interfaces and bridges were down, all the virtual dhcp services were down and ovs-vswitchd was restarted. But when I up the interfaces and dhcp services and reboot the other VMs, I have got the connections with them again. >> >> It’s ok when to reboot the VM without physical NIC. We think it may be caused by ovs-dpdk, so we stop to use ovs-dpdk and start the ovs manually, the problem was gone. >> >> I cannot understand the problem, anybody could give me some comments for it? Thanks a lot. >> >> [1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso >> >> Kunpeng >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PastedGraphic-8.png Type: image/png Size: 103649 bytes Desc: not available URL: From tingjie.chen at intel.com Thu Jul 18 14:23:35 2019 From: tingjie.chen at intel.com (Chen, Tingjie) Date: Thu, 18 Jul 2019 14:23:35 +0000 Subject: [Starlingx-discuss] FW: Proposal for Ceph containerization for StarlingX References: Message-ID: Hi all, I have file slice to present the proposal, and some updates, feel free to give comments ~ Slice: https://docs.google.com/presentation/d/1VcolrSux-sEBUYcQA06yrEeYx4KM4Ne5wryeYBmSy_o/edit#slide=id.p BP: https://review.opendev.org/#/c/656371/ Design Doc: https://docs.google.com/document/d/1lnAZSu4vAD4EB62Mk18mCgM7aAItl26sJa1GBf9DJz4/edit?usp=sharing Thanks, Tingjie From: Chen, Tingjie Sent: Tuesday, July 16, 2019 1:15 AM To: starlingx-discuss at lists.starlingx.io Subject: Proposal for Ceph containerization for StarlingX Hi, There is proposal for Ceph containerization for StarlingX, welcome review and comments. The background: Ceph is the standard persistent storage backend for starlingx, this story is to implement ceph containerization. In the proposal, we discuss the benefit of containerized Ceph, also give solution and edit design document for the implementation. BP: https://review.opendev.org/#/c/656371 Design doc: https://docs.google.com/document/d/1lnAZSu4vAD4EB62Mk18mCgM7aAItl26sJa1GBf9DJz4/edit?usp=sharing SB: https://storyboard.openstack.org/#!/story/2005527 Thanks, Tingjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Thu Jul 18 14:52:47 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 18 Jul 2019 09:52:47 -0500 Subject: [Starlingx-discuss] proposal for Intel GPU K8s device plugin support in StarlingX In-Reply-To: References: <9BAB5B7CAF57C3459E4636391F1071CE0530F2DA@shsmsx102.ccr.corp.intel.com> Message-ID: On Wed, Jul 17, 2019 at 12:24 PM Yong Hu wrote: > > pls see my comments. > > On 17/07/2019 9:08 AM, Victor Rodriguez wrote: > > On Mon, Jul 15, 2019 at 3:37 AM An, Ran1 wrote: > >> > >> Hi All > >> > >> here is the proposal for enabling intel-gpu-plugin on StarlingX , welcome suggestion and advise. > >> > >> > >> > >> The background: > >> > >> As a part of resource management, kubernetes provides a device plugin framework [2] for vendors to advertise their resources to the kubelet since version 1.8. StarlingX has already supported SR-IOV CNI plugins now [3]. > >> > >> Intel-gpu-plugins is a device plugin implementation [4] for intel GPU (with driver i915). Users could deploy their pods with Intel GPU resource requests or limits, if intel-gpu-plugins was integrated into StarlingX. > >> > > > > +1 > > > >> > >> > >> proposal: > >> > >> Deploy intel-gpu-plugins as a daemon set with node selector “intelgpu: enabled”. Kubernetes label “intelgpu: enabled” will be set automatically once the node detected supported GPU device. > >> > >> Details are shown as follows: > >> > >> 1. Build StarlingX plugin docker image based on [5], the implement in starlingx are [6] and [7] > >> > > > > ok > > > >> 2. Deploy Intel-gpu-plugins daemon set in tasks “bringup_kubemaster” after kubernetes master has been initialized during ansible bootstrap process. Add value “import_plugins” and value list “kube_plugins” as condition of deploying Intel-gpu-plugins daemon set, so user could determine whether Intel-gpu-plugins would be enabled. Create file “/etc/platform/enabled_kube_plugins” and write list “kube_plugins” into the file after active Intel-gpu-plugin daemon if “import_plugins” is true. 
Partial implementation is [8] > >> > >> 3. Detect supported GPU device with the help of sysinv agent and request to set kubernetes label “intelgpu: enabled” for a specific node by calling sysinv conductor rpcapi. Sysinv conductor will check file “/etc/platform/enabled_kube_plugins”, and set the kubernetes label if the file exists and intel-gpu-plugins is in the list. Partial implementation is [9] > >> > > > > I like the approach and the demo, I just have a question: > > > > One question, what kind of workload does the container running on the GPU support, do we have to write it in CUDA? Do we have some example of source code that will be run inside the container that will run on the GPU? > > > No, this is Intel GPU and we don't support CUDA APIs. We can use OpenCL. > You can find Intel GPU device plugin demos > here: https://github.com/intel/intel-device-plugins-for-kubernetes/tree/8310d84f96f9fc7b7487b2dfb9059905638c58fe/demo > Thanks, this clarifies my questions :) > > regards > > >> > >> [1] https://storyboard.openstack.org/#!/story/2005937 > >> > >> [2] https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/device-plugins/ > >> > >> https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/device-plugin.md > >> > >> [3] https://review.opendev.org/#/c/655495/ > >> > >> [4] https://github.com/intel/intel-device-plugins-for-kubernetes > >> > >> [5] https://github.com/intel/intel-device-plugins-for-kubernetes/blob/master/cmd/gpu_plugin/README.md > >> > >> [6] https://review.opendev.org/668803 > >> > >> [7] https://review.opendev.org/668808 > >> > >> [8] https://review.opendev.org/666510 > >> > >> [9] https://review.opendev.org/666511 > >> > >> > >> > >> Thanks > >> > >> Ran > >> > >> > >> > >> _______________________________________________ > >> Starlingx-discuss mailing list > >> Starlingx-discuss at lists.starlingx.io > >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > From dtroyer at gmail.com Thu Jul 18 15:55:57 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 18 Jul 2019 10:55:57 -0500 Subject: [Starlingx-discuss] Community Call (July 17, 2019) In-Reply-To: <20190717155108.353jsu2u2klhl3u7@yuggoth.org> References: <586E8B730EA0DA4A9D6A80A10E486BC007AAFCFE@ALA-MBD.corp.ad.wrs.com> <20190717155108.353jsu2u2klhl3u7@yuggoth.org> Message-ID: On Wed, Jul 17, 2019 at 10:52 AM Jeremy Stanley wrote: > Since Dean is one of the ML owners, he ought to be able to go to > http://lists.starlingx.io/cgi-bin/mailman/admin/starlingx-discuss > and set the max_message_size field there to the KB length desired. I plead vacation for not remembering I could do that directly... thanks for the reminder Jeremy. I have doubled the size to 120K. Based on what I have seen in the moderation queue that will cover a large percentage of the messages that get queued for size. I am reluctant to remove the setting altogether or to make it really large; this is an archived list, and often the cause of the message too big errors is a lack of trimming quotes that certain mail clients encourage with default top-posting; there is no need to preserve a lengthy conversation N times in a thread.
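(For what it's worth, a trimmed reply is easy to sanity-check locally before sending; a trivial sketch, with draft.eml standing in for wherever your client saves the draft:

wc -c draft.eml    # byte count; the list cap is now roughly 120000

but the real fix is simply trimming the quotes.)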
dt -- Dean Troyer dtroyer at gmail.com From maria.g.perez.ibarra at intel.com Thu Jul 18 22:27:46 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 18 Jul 2019 22:27:46 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/18/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID="20190712T013000Z" from (link) ---------------------------------------------------------------------- Overall Results: Total = 415 Pass = 250 Fail = 8 Blocked = 34 Not Run = 105 Obsolete = 18 Total executed = 292 Pass Rate = 96.89% Formula used : Pass Rate = pass * 100 / (pass + fail) ---------------------------------------------------------------------- Results per Domain: Regression - AIO-SX 25 PASS | 1 Obsolete Regression - Backup & Restore Regression - Distributed Cloud Regression - Gnoochi 15 PASS Regression - FM Regression - HA 2 4 PASS | 1 FAIL Regression - Heat 12 PASS | 1 Obsolete Regression - Horizon 4 PASS Regression - Install and Config 5 PASS Regression - Maintenance 7 PASS | 1 FAIL Regression - Networking 94 PASS | 3 FAIL | 19 BLOCKED | 14 Obsolete Regression - Nova 2 PASS | Regression - Security 34 PASS | 1 FAIL | 6 BLOCKED | 1 Obsolete Regression - Storage Regression - Inventory 29 PASS | 1 FAIL System Test 19 PASS | 1 FAIL | 9 BLOCKED | 1 Obsolete --------------------------------------------------------------------------- Bugs: Controller can't unlock after lock on AIO-SX https://bugs.launchpad.net/starlingx/+bug/1833472 user does not login within configured time(60s) login is aborted https://bugs.launchpad.net/starlingx/+bug/1833469 After pull data cable on the compute, no alarm has triggered https://bugs.launchpad.net/starlingx/+bug/1834512 System account doesn't block after invalid login attempts https://bugs.launchpad.net/starlingx/+bug/1814345 Containers: lock_host failed on a host with config_drive VM https://bugs.launchpad.net/starlingx/+bug/1821026 200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot https://bugs.launchpad.net/starlingx/+bug/1832047 virsh only listing one volume, even though there was an additional volume attached after instantiation https://bugs.launchpad.net/starlingx/+bug/1834194 3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host https://bugs.launchpad.net/starlingx/+bug/1834255 Device UUID is missing when boot up VM with block device https://bugs.launchpad.net/starlingx/+bug/1835282 stx-openstack apply takes longer time when lock and unlock on standby controller https://bugs.launchpad.net/starlingx/+bug/1834083 Port list was not showing for some computes during install https://bugs.launchpad.net/starlingx/+bug/1834245 Instance created with a flat network spawns in error state https://bugs.launchpad.net/starlingx/+bug/1835965 When creating instance with pci-passthrough port getting error https://bugs.launchpad.net/starlingx/+bug/1836682 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 VM fail to live migrate after evacuation https://bugs.launchpad.net/starlingx/+bug/1836402 application apply fails after compute lock and unlock https://bugs.launchpad.net/starlingx/+bug/1836609 Total Bugs: 16 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033 Regards! 
Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Thu Jul 18 22:48:01 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 18 Jul 2019 17:48:01 -0500 Subject: [Starlingx-discuss] Where can we get the list of supported features in STX Message-ID: Hi team I was trying to test the functionality of VM recovery, but according to this bug: https://bugs.launchpad.net/starlingx/+bug/1835591 It seems to be disabled for incoming R2, however, it was enabled before in R1. I was wondering if we have a list of what features support/will support R2. It might be good to have it as part of the standard release notes that many SW project publishes on a new release version. Here an example of a template we could use: https://sourceware.org/ml/libc-announce/2019/msg00000.html https://sourceware.org/glibc/wiki/Release/2.29 What do you think? If we already have a proper release note and a list of the supported features could you please let me know? Thanks Victor R From maria.g.perez.ibarra at intel.com Thu Jul 18 23:19:43 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 18 Jul 2019 23:19:43 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190718 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-18 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Jul 19 03:56:53 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 18 Jul 2019 20:56:53 -0700 Subject: [Starlingx-discuss] bug severity and priority In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com> Message-ID: Folks, As I mentioned in a prior email about a previous project (Yocto Project), we were also time-based (every 6 months). 
We defined Importance [0] of the bug based on Severity (chosen by submitter) and Priority (assigned during a triage process). We had 5 Priority levels in Bugzilla: High, Medium+, Medium, Low and Undecided; these would map to our Critical, High, Medium, Low and Undecided. This clearly frames it based on Milestones and releases due to the time-based nature of the Yocto Project. Notice that the High/Critical is the only one that is truly "gating" or a milestone/release blocker; the Medium+, our High, won't block a milestone but should be fixed for a release, though it could be a dot.dot soon after the release. > Importance > The Importance of the bug is defined by its Priority and Severity. The Priority classifies the bug's fixing order. In other words, how soon will it get fixed relative to other bugs? Priorities are set during the bug Triage meeting and cannot be changed by the user. The priority appears to the left of the Severity field. Here are the values that Priority can be set to during the Triage meeting: > > High -- Bug fixing is planned immediately for the target milestone. Milestone cannot be released if there is a high bug opened against the milestone. High priority issues cause major functional loss of a specific feature that is POR for the up-coming milestone. These issues are easily hit by the user and greatly impact the user experience or customer requirements. Finally, these issues could be urgent security fixes that need to be corrected in a prior release. The bug assignee is not to change the target milestones for High bugs without prior approval of the Triage team. > Medium+ -- Bug fixing is planned before the milestone and must be fixed or have a solution planned before the release is finalized. These issues are not show-stoppers but have somewhat significant impact to system functions and user experience. > Medium -- These are important issues we keep track of and try to plan fixing for the release. They have limited impact for the system functions and releases. > Low -- Bug fixing is only done opportunistically. Generally not planned for the up-coming project release. Issues that are not a POR feature request, or are hard to reproduce, fall into this category. > Undecided -- These issues are newly reported and are undecided before Triage. Issues that are a feature request, which isn't approved for future release yet. This issue will be changed to have an actual Priority after the Triage team approves it. > Note: High impact but Low Priority bugs can be documented in the release notes. > > The Severity indicates how much the issue impacted the person reporting the bug. Severity can be categorized into five areas. > > Critical -- Crashes, hang, loss of data, negative impact to other components, memory leak etc. > Major -- Major loss of functionality of POR. > Normal -- Regular issue, some loss of functionality under certain circumstance. This is the default Severity. > Minor -- Minor loss of functionality, or issues with an easy workaround available. > Enhancement -- Request for enhancement or new feature to be worked. I hope this helps by providing a different viewpoint from another project. Sau! [0] https://wiki.yoctoproject.org/wiki/Bugzilla_Configuration_and_Bug_Tracking#Importance On 7/17/19 3:41 AM, Zvonar, Bill wrote: > Hi Cindy, > > Thought about this some more, sorry it took me so long to respond further. > > I agree with splitting out the definitions of release priority/importance (which is subjective) from the technical severity (which is I'd say much less subjective). > > Do we agree that one of the key next steps is to define the severity levels for defects in different domains? > > Once we have those agreed and written down somewhere, they can be used as guidance for people that are opening Launchpads, and for those that screen them. Someone will note that some bugs cross domains, so it's not as simple as looking at one set of severity definitions, but let's cross that bridge next.
> > Do we agree that one of the key next steps is to define the severity levels for defects in different domains? > > Once we have those agreed and written down somewhere, they can be used as guidance for people that are opening Launchpads, and for those that screen them. Someone will note that some bugs cross domains, so it's not as simple as looking at one set of severity definitions, but let's cross that bridge next. > > Then, if we've got general alignment on the severity definitions per domain, we can sort out what to use as a QRC formula for a release, I think. > > Btw, it'd be nice if Launchpad had a field for Severity, so we could track that more easily - does anybody know if we can just request this & get it added as a custom field? > > Bill... > > -----Original Message----- > From: Xie, Cindy > Sent: Wednesday, July 10, 2019 7:13 PM > To: Zvonar, Bill ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada > Subject: RE: bug severity and priority > > Bill, > I definitely agree that not all Medium shall be pushed to stx.3.0, this needs to be assessed carefully. But if we combine the severity and priority together, then this decision needs to put resource factor in consideration as well. > > Actually, I think it's confusing of calling individual LP "gating" - I understand that we want to get the product quality to a good shape and want to get bugs fixed as many as possible before we ship it. I will suggest to use defects# as part of release criteria (QRC). Example could be: > > Number of Critical P1 defects Zero > Number of High P2 defects < x > Number of Medium P3 defects < y > > And the only thing we need to agree on is the "x" and "y". It makes TSC or release team to make decision easier. The QRC needs to be agreed earlier instead of right before the release decision shall be made. This way, we can really direct our engineering resource working on the most important items and we all have an agreed common goal. > > Thanks. - cindy > > -----Original Message----- > From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] > Sent: Thursday, July 11, 2019 1:39 AM > To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada > Subject: RE: bug severity and priority > > Hi Cindy, > > Thanks for sending this, I think this gives us something to start the discussion. > > However we decide to align on severity/priority (I'll comment on that more later, need to think about it more), I think we need to be careful before we move all mediums to 3.0, it may be too much of a Gordian knot solution. > > I think we need to assess the mediums (as Yong suggested earlier) to say why they should or should not be in 2.0. I also think this may help us sort out what our gating criteria are. > > Bill... > > -----Original Message----- > From: Xie, Cindy > Sent: Wednesday, July 10, 2019 10:42 AM > To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill ; Khalil, Ghada > Subject: bug severity and priority > > Bill/Ghada, > I am sending out my definition of bug severity and priority: > > Bug Exposure or Severity Definition > 1- Critical Product or key feature is not usable for intended purpose. > 2- High Product or key feature is not reliably usable for intended purpose or use is significantly impaired > 3 - Medium Product or key feature is usable provided by a workaround > 4 - Low Tolerable impact to user experience with minimal service and support costs > > Bug Priority Definition > P1 - Stopper Resolution of this defect takes precedence over other defects and most other development activities. 
This level is used to focus maximum development team resources to resolve a defect in the shortest possible timeframe.
> P2 - High	Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. The urgency to fix a P2 priority defect is imminent. - P2 priority defects are intended to be resolved by the next planned external release of the software.
> P3 - Medium	Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. - P3 priority defects must have a planned timeframe for a verified resolution.
> P4 - Low	Resolution of the defect has least urgency to resolve, P4 priority defects may or may not have plans to resolve.
>
> Let's discuss this and agree how we'd like to use them. My suggestion for current "Medium" is that we can mark them as "stx.3.0" and then in the beginning of stx.3, they can move Priority to "high" due to the fact they want to get them fixed in 3.0.
>
> But the bug severity should never change because they are standard.
>
> Thx. - cindy
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>

From chenjie.xu at intel.com  Fri Jul 19 08:17:46 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Fri, 19 Jul 2019 08:17:46 +0000
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To: 
References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net>
Message-ID: 

Hi Kunpeng,

From the logs below, we can see that:
1. The OVS agent detects that OVS is dead.
2. After OVS has been restarted, the OVS agent tries to reset bridges and recover ports.

2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389
2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.

Could you please attach the logs below?
/var/log/openvswitch/ovs-vswitchd.log
/var/log/openvswitch/ovsdb-server.log
/var/log/syslog
neutron log (the log file is specified in /etc/neutron/neutron.conf)

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Friday, July 19, 2019 10:21 AM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,

These are the logs. I restarted the VM at 02:04 UTC. In openstack.log I found some error messages; I don't know whether they are relevant.
2019-07-19 02:04:16.141 186477 INFO eventlet.wsgi.server [req-6600ca74-1f93-4e54-88c1-35f964f1e055 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0178909
2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json]: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file)
2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json] dies due to the error: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file)
2019-07-19 02:04:22.077 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused
2019-07-19 02:04:22.079 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused)
2019-07-19 02:04:22.089 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused
2019-07-19 02:04:22.090 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused)
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out: Timeout: 10 seconds
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Failed to communicate with the switch: RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int Traceback (most recent call last):
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     flows = self.dump_flows(constants.CANARY_TABLE)
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 147, in dump_flows
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     reply_multi=True)
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 95, in _send_msg
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     raise RuntimeError(m)
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out
2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int
2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389
2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.
2019-07-19 02:04:24.242 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Mapping physical network providernet-a to bridge br-phy0
2019-07-19 02:04:24.295 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Bridge br-phy0 has datapath-ID 0000f8f21e640120

Kunpeng

On Jul 19, 2019, at 09:29, Xu, Chenjie wrote:

Hi Kunpeng,

You can check the bridges and openflows with the following commands:
ovs-vsctl show
ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-phy0

The virtual networks used by the VMs are based on those openflows, and restarting ovs-vswitchd does not reinstall the openflows. That's why you lose the connections to the VMs when ovs-vswitchd restarts. I think we need to figure out why ovs-vswitchd is restarted when you restart the VM. Could you please check the logs below to see why ovs-vswitchd is restarted?
/var/log/openvswitch/ovs-vswitchd.log
/var/log/syslog
neutron log (the log file is specified in /etc/neutron/neutron.conf)

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Thursday, July 18, 2019 7:09 PM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

I also find that when ovs-vswitchd is restarted, I lose the connections to the VMs.
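A rough way to show the effect, assuming the standard ovs-ofctl and systemctl tools on the controller (the /tmp paths are only examples), is to snapshot the flows and interfaces, restart the daemon, and diff:

ovs-ofctl dump-flows br-int > /tmp/flows.before   # snapshot the openflows installed by neutron
ip a > /tmp/ip.before                             # snapshot interface state
systemctl restart ovs-vswitchd
ovs-ofctl dump-flows br-int > /tmp/flows.after
ip a > /tmp/ip.after
diff /tmp/flows.before /tmp/flows.after           # the installed openflows are not restored
diff /tmp/ip.before /tmp/ip.after                 # interface/bridge state changes, as shown below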
Before restart ovs:
controller-0:/home/wrsroot# ip a
1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.3/24 brd 127.168.204.255 scope host lo
       valid_lft forever preferred_lft forever
    inet 169.254.202.2/24 scope global lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.2/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.5/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.6/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.152/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
    inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
       valid_lft forever preferred_lft forever
5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8060/64 scope link
       valid_lft forever preferred_lft forever
9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8061/64 scope link
       valid_lft forever preferred_lft forever
10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
16: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:120/64 scope link
       valid_lft forever preferred_lft forever
17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
       valid_lft forever preferred_lft forever

After:
controller-0:/home/wrsroot# systemctl restart ovs-vswitchd
controller-0:/home/wrsroot# ip a
1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.3/24 brd 127.168.204.255 scope host lo
       valid_lft forever preferred_lft forever
    inet 169.254.202.2/24 scope global lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.2/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.5/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.6/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet 127.168.204.152/24 scope host secondary lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
    inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
       valid_lft forever preferred_lft forever
5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8060/64 scope link
       valid_lft forever preferred_lft forever
9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8061/64 scope link
       valid_lft forever preferred_lft forever
10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
16: br-phy0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:120/64 scope link
       valid_lft forever preferred_lft forever
17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
       valid_lft forever preferred_lft forever
20: tapfb74713e-cc: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff
21: tap1a965902-0b: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff

One of the VMs:

On Jul 18, 2019, at 18:09, 张鲲鹏 wrote:

Hi Xu, Chenjie

I have tried to create a VM with 2 pci-passthrough network ports without DPDK, and there was the same problem when I rebooted it. It was also the same when I rebooted a VM with 2 SR-IOV VFs. Do you have any ideas to debug this problem?

Thanks
Kunpeng

On Jul 17, 2019, at 14:59, Xu, Chenjie wrote:

Hi Kunpeng,

Maybe you can use SR-IOV and pass through the VF, which has performance similar to the physical NIC, to the VM. Then you can use DPDK inside the VM with the VF.

Sorry, I don't have an easy way to disable DPDK in stx1.0. The following command is used for stx2.0, which is still in progress:
system modify --vswitch_type none

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 5:40 PM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,

Well, I will try the network topology as you suggested. But a passthrough NIC with DPDK is our customer's requirement. Do you have an easy way to disable DPDK for Open vSwitch in stx1.0? I tried to execute "system modify --vswitch_type none" before "system host-unlock controller-0", but it doesn't work well.
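For reference, this is roughly the sequence I executed (a sketch; it assumes the admin credentials are already sourced on the controller):

system modify --vswitch_type none    # try to switch the vswitch type away from OVS-DPDK
system host-unlock controller-0      # unlock so the new configuration gets applied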
Thanks
Kunpeng

On Jul 16, 2019, at 16:49, Xu, Chenjie wrote:

Hi Kunpeng,

When you reboot the VM with two physical pci-passthrough NICs, ovs-vswitchd is restarted and the interfaces and bridges go down. The virtual networks used by the VMs are based on these interfaces and bridges, so the other VMs lose their connections. Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which are used for passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology:
2 network ports without DPDK -> VM
2 network ports with DPDK -> Data Network
1 network port without DPDK -> OAM

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 3:54 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi guys,

Recently I hit a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, none of the VMs can be reached: I lose the connections to all VMs, and the VMs also lose each other. Below is the StarlingX environment.

1. stx1.0 version, bootimage[1]
2. Simplex deployment
3. 5 network ports. Only one doesn't support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network, and the other two are used for passthrough to a VM.
4. The VM was also attached to two virtual networks. I have tested the case of attaching only one virtual net; there was no problem.

When I reboot the VM, several things happen: the interfaces and bridges go down, all the virtual DHCP services go down, and ovs-vswitchd is restarted. But when I bring up the interfaces and DHCP services and reboot the other VMs, I get the connections to them back again. It's OK when I reboot a VM without a physical NIC. We think it may be caused by ovs-dpdk: when we stopped using ovs-dpdk and started OVS manually, the problem was gone. I cannot understand the problem; could anybody give me some comments on it? Thanks a lot.

[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso

Kunpeng

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jose.perez.carranza at intel.com  Fri Jul 19 12:15:58 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Fri, 19 Jul 2019 12:15:58 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: 
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com>
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com>

> -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 18, 2019 10:57 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] bug severity and priority > > > Folks, > > As I mentioned in a prior email about a previous project (Yocto Project), we > were also time-based (every 6 months). We defined Importance [0] of the > bug based on Severity (chosen by submitter) and Priority (assigned during a > triage process). We had 5 Priory levels in > Bugzilla: High, Medium+, Medium, Low and Undecided, these would map to > our Critical, High, Medium, Low and Undecided.

Those triage meetings were very helpful because they were live discussions about the bugs with all the stakeholders. I think we should consider having a weekly meeting just to triage bugs.

Regards,
José

> > This clearly frames it based on Milestones and releases due to the time > based nature of the Yocto Project. Notice that the High/Critical is the only > one that is truly "gating" or milestone/release blocker, the > Medium+, our High, won't block a milestone but should be fixed for a > release, but could be a dot.dot soon after the release. > > > Importance > > The Importance of the bug is defined by its Priority and Severity. The > Priority classifies the bug's fixing order. In other words, how soon will it get > fixed relative to other bugs? Priorities are set during the bug Triage meeting > and cannot be changed by the user.
Issues that are a feature request, which isn't approved for future > release yet. This issue will be changed to have an actual Priority after the > Triage team approves it. > > Note: High impact but Low Priority bugs can be documented in the release > notes. > > > > The Severity indicates how much the issue impacted the person reporting > the bug. Severity can be categorized into five areas. > > > > Critical -- Crashes, hang, loss of data, negative impact to other components, > memory leak etc. > > Major -- Major loss of functionality of POR. > > Normal -- Regular issue, some loss of functionality under certain > circumstance. This is the default Severity. > > Minor -- Minor loss of functionality, or issues with easy workaround > available. > > Enhancement -- Request for enhancement or new feature to be worked. > > I hope the helps by provide a different viewpoint from another project. > > Sau! > > [0] > https://wiki.yoctoproject.org/wiki/Bugzilla_Configuration_and_Bug_Tracking > #Importance > > On 7/17/19 3:41 AM, Zvonar, Bill wrote: > > Hi Cindy, > > > > Thought about this some more, sorry it took me so long to respond further. > > > > I agree with splitting out the definitions of release priority/importance > (which is subjective) from the technical severity (which is I'd say much less > subjective). > > > > Do we agree that one of the key next steps is to define the severity levels > for defects in different domains? > > > > Once we have those agreed and written down somewhere, they can be > used as guidance for people that are opening Launchpads, and for those that > screen them. Someone will note that some bugs cross domains, so it's not as > simple as looking at one set of severity definitions, but let's cross that bridge > next. > > > > Then, if we've got general alignment on the severity definitions per domain, > we can sort out what to use as a QRC formula for a release, I think. > > > > Btw, it'd be nice if Launchpad had a field for Severity, so we could track that > more easily - does anybody know if we can just request this & get it added as > a custom field? > > > > Bill... > > > > -----Original Message----- > > From: Xie, Cindy > > Sent: Wednesday, July 10, 2019 7:13 PM > > To: Zvonar, Bill ; starlingx- > discuss at lists.starlingx.io; Khalil, Ghada > > Subject: RE: bug severity and priority > > > > Bill, > > I definitely agree that not all Medium shall be pushed to stx.3.0, this needs > to be assessed carefully. But if we combine the severity and priority together, > then this decision needs to put resource factor in consideration as well. > > > > Actually, I think it's confusing of calling individual LP "gating" - I understand > that we want to get the product quality to a good shape and want to get bugs > fixed as many as possible before we ship it. I will suggest to use defects# as > part of release criteria (QRC). Example could be: > > > > Number of Critical P1 defects Zero > > Number of High P2 defects < x > > Number of Medium P3 defects < y > > > > And the only thing we need to agree on is the "x" and "y". It makes TSC or > release team to make decision easier. The QRC needs to be agreed earlier > instead of right before the release decision shall be made. This way, we can > really direct our engineering resource working on the most important items > and we all have an agreed common goal. > > > > Thanks. 
- cindy > > > > -----Original Message----- > > From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] > > Sent: Thursday, July 11, 2019 1:39 AM > > To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io; > Khalil, Ghada > > Subject: RE: bug severity and priority > > > > Hi Cindy, > > > > Thanks for sending this, I think this gives us something to start the > discussion. > > > > However we decide to align on severity/priority (I'll comment on that more > later, need to think about it more), I think we need to be careful before we > move all mediums to 3.0, it may be too much of a Gordian knot solution. > > > > I think we need to assess the mediums (as Yong suggested earlier) to say > why they should or should not be in 2.0. I also think this may help us sort > out what our gating criteria are. > > > > Bill... > > > > -----Original Message----- > > From: Xie, Cindy > > Sent: Wednesday, July 10, 2019 10:42 AM > > To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill > ; Khalil, Ghada > > Subject: bug severity and priority > > > > Bill/Ghada, > > I am sending out my definition of bug severity and priority: > > > > Bug Exposure or Severity Definition > > 1- Critical Product or key feature is not usable for intended purpose. > > 2- High Product or key feature is not reliably usable for > intended purpose or use is significantly impaired > > 3 - Medium Product or key feature is usable provided by a workaround > > 4 - Low Tolerable impact to user experience with minimal > service and support costs > > > > Bug Priority Definition > > P1 - Stopper Resolution of this defect takes precedence over other defects > and most other development activities. This level is used to focus maximum > development team resources to resolve a defect in the shortest possible > timeframe. > > P2 - High Resolution of the defect has precedence over resolving other > defects with lesser classifications of priority. The urgency to fix a P2 priority > defect is imminent. - P2 priority defects are intended to be resolved by the > next planned external release of the software. > > P3 - Medium Resolution of the defect has precedence over resolving other > defects with lesser classifications of priority. - P3 priority defects must have a > planned timeframe for a verified resolution. > > P4 - Low Resolution of the defect has least urgency to resolve, P4 > priority defects may or may not have plans to resolve. > > > > Let's discuss this and agree how we'd like to use them. My suggestion for > current "Medium" is to we can mark them as "stx.3.0" and then in the > beginning of stx.3, they can move Priority to "high" due to the fact they want > to get them fixed in 3.0. > > > > But the bug severity should never change because they are standard. > > > > Thx. 
- cindy
> >
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> >
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From cindy.xie at intel.com  Fri Jul 19 14:31:29 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Fri, 19 Jul 2019 14:31:29 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 7/17
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FE2297@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FE2297@SHSMSX104.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE52D5@SHSMSX104.ccr.corp.intel.com>

Regarding the kernel minor version upgrade which we discussed in the meeting, we've reached consensus with Ken Young and other security team members. For the two options we had:
Option#1: upgrade the kernel to 21.3 in master only;
Option#2: only cherry-pick the security patch to address CVE-11477.
The conclusion is that we will stick with Option#1: put the kernel upgrade into master after RC1 is branched out; then we continue to do testing on master; if everything goes well, we can cherry-pick the patches to the release branch.

Thanks Zhao Shuai for the good work, and please continue the upgrade in master with Workflow -1 for now till RC1.

Thx. - cindy

-----Original Message-----
From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Wednesday, July 17, 2019 9:51 PM
To: 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent' ; Wold, Saul
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 7/17

Agenda & Notes for 7/17 meeting:

- continue Python2to3 plan review (Austin)
py_file  rpm_name  python3 status and action  who is using
2  openvswitch  4-Risk  only include minor changes, can ignore
1  python-cephfs  5-low risk  ceph, only include minor changes, can ignore
178  python-smartpm  4-Risk  this standalone package. may not be used
1  qemu-kvm-ev  2-upgrade  "qemu-kvm-ev── mtce-compute── mtce-control", only include minor changes, can ignore
30  requests-toolbelt  2-upgrade  cgcs-patch-controller
2  rpm-python  4-Risk  "rpm-python── createrepo── cgcs-patch-controller── python-smartpm── yum── createrepo── cgcs-patch-controller─ yum-plugin-fastestmirror"

3 stories created:
- https://storyboard.openstack.org/#!/story/2006227: for python-smartpm, propose to use yum; now assigned to Don (may affect the update capability).
- https://storyboard.openstack.org/#!/story/2006228: for rpm_python, propose to use version_utils; now assigned to Don.
- https://storyboard.openstack.org/#!/story/2006158 created to track the remaining items. Austin is looking into requests-toolbelt.

Python3 is not supported in CentOS 7.6.
Option#1: if we want to include the Python3 runtime cut-over in stx.3, then we have to upgrade to CentOS 8.0;
Option#2: stay with CentOS 7.6, then we will NOT have the Python3 runtime cut-over in stx.3.0.
=> agreed on Option#2.

- kernel minor version upgrade to kernel-3.10.0-957.21.3 (Haitao/Shuai)
LP#1836685 to address CVE-2019-11477, pending security team approval. 3 patches submitted, please review. Upversion from 12.2 to 21.3.
Concern regarding the timing and technical risk associated with the kernel upgrade.
Saul: 8 patches in the rpm but maybe there are more changes.
AR: Cindy to send email to the mailing list and consult Ken about the CVE.
AR: Saul to follow up and review the changes in 957.21.3 vs 957.12.2: release notes and the actual delta between the two versions.
Option#1: upgrade the kernel to 21.3 in master only; Option#2: only cherry-pick the security patch to address CVE-11477.
AR: Shuai to provide a test report for both options next week. We will make the decision on the mailing list based on the data from Saul and Shuai.

- Ceph containerization plan review (Tingjie)
Spec under review: https://review.opendev.org/#/c/656371/, design doc: https://docs.google.com/document/d/1lnAZSu4vAD4EB62Mk18mCgM7aAItl26
Start the implementation based on the spec.
AR: Tingjie to attend tomorrow's TSC meeting for spec review.

- stx 2.0 bug triage/review (Cindy)
- stx.storage: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
1830191: the fix got reverted and Ovidiu is working on a new fix;
1833738: Bob working on a fix and should get this in for stx.2.0
1827119: ready for review again
1831635: Liang needs test cases from WR to reproduce the failure and can log in to the system for live debug.
- stx.distro.others: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
1814345: re-opened yesterday, Zhipeng already has a new patch uploaded and under review.

- opens (all)
Bill: bug gating criteria? Defer the discussion to the community call.

-----Original Message-----
From: Xie, Cindy
Sent: Tuesday, July 16, 2019 8:45 PM
To: 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent' ; Wold, Saul
Subject: Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/17

All,
Below is the agenda I proposed, please feel free to add more:

Agenda for 7/17 meeting:
- continue Python2to3 plan review (Austin)
- kernel minor version upgrade to kernel-3.10.0-957.21.3 (Haitao/Shuai)
- Ceph containerization plan review (Tingjie)
- stx 2.0 bug triage/review (Cindy)
- opens (all)

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, July 17, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)

Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ

Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From cindy.xie at intel.com  Fri Jul 19 14:34:58 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Fri, 19 Jul 2019 14:34:58 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE52F4@SHSMSX104.ccr.corp.intel.com>

Jose,
Just to clarify: for the weekly bug triage meeting, you only ask to triage the new bugs, right?

My concern is about the triage frequency: right now, the new bugs are triaged almost on a daily basis, mostly by Ghada, consulting technical experts. If we switch to a triage meeting, I am not sure how the new LPs can be handled in a timely way.

But I agree that having a triage meeting is a good idea.
Thx. - cindy

-----Original Message-----
From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
Sent: Friday, July 19, 2019 8:16 PM
To: Saul Wold ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] bug severity and priority

> -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 18, 2019 10:57 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] bug severity and priority > > > Folks, > > As I mentioned in a prior email about a previous project (Yocto > Project), we were also time-based (every 6 months). We defined > Importance [0] of the bug based on Severity (chosen by submitter) and > Priority (assigned during a triage process). We had 5 Priory levels > in > Bugzilla: High, Medium+, Medium, Low and Undecided, these would map to > our Critical, High, Medium, Low and Undecided. Those triage meetings were very helpful because they were live discussions about the bugs with all the stakeholders. I think we should consider having a weekly meeting just to triage bugs. Regards, José > > This clearly frames it based on Milestones and releases due to the > time based nature of the Yocto Project. Notice that the High/Critical > is the only one that is truly "gating" or milestone/release blocker, > the > Medium+, our High, won't block a milestone but should be fixed for > > a > release, but could be a dot.dot soon after the release. > > > Importance > > The Importance of the bug is defined by its Priority and Severity. > > The > Priority classifies the bug's fixing order. In other words, how soon > will it get fixed relative to other bugs? Priorities are set during > the bug Triage meeting and cannot be changed by the user.
The priority > appears to the left of the Severity field. Here are the values that > Priority can be set to during the Triage > meeting: > > > > High -- Bug fixing is planned immediately for the target milestone. > Milestone cannot be released if there is a high bug opened against the > milestone. High priority issues cause major functional loss of a > specific feature that is POR for the up-comping milestone. These > issues are easily hit by the user and greatly impact the user > experience or customer requirements. Finally, these issues could be > urgent security fixes that need to be corrected in a prior release. > The bug assignee is not to change the target milestones for High bugs without prior approval of the Triage team. > > Medium+ -- Bug fixing is planned before the milestone and must be > > Medium+ fixed or > have a solution planned before the release is finalized. These issues > are not show-stoppers but have somewhat significant impact to system > functions and user experience. > > Medium -- These are important issues we keep track and try to plan > > fixing > for the release. They have limited impact for the system functions and > releases. > > Low -- Bug fixing is only done opportunistically. Generally not > > planned for > the up-coming project release. Issues that are not a POR feature > request, or are hard to reproduce fall into this category. > > Undecided -- These issues are newly reported and are undecided > > before > Triage. Issues that are a feature request, which isn't approved for > future release yet. This issue will be changed to have an actual > Priority after the Triage team approves it. > > Note: High impact but Low Priority bugs can be documented in the > > release > notes. > > > > The Severity indicates how much the issue impacted the person > > reporting > the bug. Severity can be categorized into five areas. > > > > Critical -- Crashes, hang, loss of data, negative impact to other > > components, > memory leak etc. > > Major -- Major loss of functionality of POR. > > Normal -- Regular issue, some loss of functionality under certain > circumstance. This is the default Severity. > > Minor -- Minor loss of functionality, or issues with easy workaround > available. > > Enhancement -- Request for enhancement or new feature to be worked. > > I hope the helps by provide a different viewpoint from another project. > > Sau! > > [0] > https://wiki.yoctoproject.org/wiki/Bugzilla_Configuration_and_Bug_Trac > king > #Importance > > On 7/17/19 3:41 AM, Zvonar, Bill wrote: > > Hi Cindy, > > > > Thought about this some more, sorry it took me so long to respond further. > > > > I agree with splitting out the definitions of release > > priority/importance > (which is subjective) from the technical severity (which is I'd say > much less subjective). > > > > Do we agree that one of the key next steps is to define the severity > > levels > for defects in different domains? > > > > Once we have those agreed and written down somewhere, they can be > used as guidance for people that are opening Launchpads, and for those > that screen them. Someone will note that some bugs cross domains, so > it's not as simple as looking at one set of severity definitions, but > let's cross that bridge next. > > > > Then, if we've got general alignment on the severity definitions per > > domain, > we can sort out what to use as a QRC formula for a release, I think. 
> > > > Btw, it'd be nice if Launchpad had a field for Severity, so we could > > track that > more easily - does anybody know if we can just request this & get it > added as a custom field? > > > > Bill... > > > > -----Original Message----- > > From: Xie, Cindy > > Sent: Wednesday, July 10, 2019 7:13 PM > > To: Zvonar, Bill ; starlingx- > discuss at lists.starlingx.io; Khalil, Ghada > > Subject: RE: bug severity and priority > > > > Bill, > > I definitely agree that not all Medium shall be pushed to stx.3.0, > > this needs > to be assessed carefully. But if we combine the severity and priority > together, then this decision needs to put resource factor in consideration as well. > > > > Actually, I think it's confusing of calling individual LP "gating" - > > I understand > that we want to get the product quality to a good shape and want to > get bugs fixed as many as possible before we ship it. I will suggest > to use defects# as part of release criteria (QRC). Example could be: > > > > Number of Critical P1 defects Zero > > Number of High P2 defects < x > > Number of Medium P3 defects < y > > > > And the only thing we need to agree on is the "x" and "y". It makes > > TSC or > release team to make decision easier. The QRC needs to be agreed > earlier instead of right before the release decision shall be made. > This way, we can really direct our engineering resource working on the > most important items and we all have an agreed common goal. > > > > Thanks. - cindy > > > > -----Original Message----- > > From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] > > Sent: Thursday, July 11, 2019 1:39 AM > > To: Xie, Cindy ; > > starlingx-discuss at lists.starlingx.io; > Khalil, Ghada > > Subject: RE: bug severity and priority > > > > Hi Cindy, > > > > Thanks for sending this, I think this gives us something to start > > the > discussion. > > > > However we decide to align on severity/priority (I'll comment on > > that more > later, need to think about it more), I think we need to be careful > before we move all mediums to 3.0, it may be too much of a Gordian knot solution. > > > > I think we need to assess the mediums (as Yong suggested earlier) to > > say > why they should or should not be in 2.0. I also think this may help > us sort out what our gating criteria are. > > > > Bill... > > > > -----Original Message----- > > From: Xie, Cindy > > Sent: Wednesday, July 10, 2019 10:42 AM > > To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill > ; Khalil, Ghada > > > Subject: bug severity and priority > > > > Bill/Ghada, > > I am sending out my definition of bug severity and priority: > > > > Bug Exposure or Severity Definition > > 1- Critical Product or key feature is not usable for intended purpose. > > 2- High Product or key feature is not reliably usable for > intended purpose or use is significantly impaired > > 3 - Medium Product or key feature is usable provided by a workaround > > 4 - Low Tolerable impact to user experience with minimal > service and support costs > > > > Bug Priority Definition > > P1 - Stopper Resolution of this defect takes precedence over other defects > and most other development activities. This level is used to focus > maximum development team resources to resolve a defect in the shortest > possible timeframe. > > P2 - High Resolution of the defect has precedence over resolving other > defects with lesser classifications of priority. The urgency to fix a > P2 priority defect is imminent. 
- P2 priority defects are intended to > be resolved by the next planned external release of the software. > > P3 - Medium Resolution of the defect has precedence over resolving other > defects with lesser classifications of priority. - P3 priority > defects must have a planned timeframe for a verified resolution. > > P4 - Low Resolution of the defect has least urgency to resolve, P4 > priority defects may or may not have plans to resolve. > > > > Let's discuss this and agree how we'd like to use them. My > > suggestion for > current "Medium" is to we can mark them as "stx.3.0" and then in the > beginning of stx.3, they can move Priority to "high" due to the fact > they want to get them fixed in 3.0. > > > > But the bug severity should never change because they are standard. > > > > Thx. - cindy > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Fri Jul 19 14:35:41 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 19 Jul 2019 07:35:41 -0700 Subject: [Starlingx-discuss] bug severity and priority In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com> Message-ID: On 7/19/19 5:15 AM, Perez Carranza, Jose wrote: >> -----Original Message----- >> From: Saul Wold [mailto:sgw at linux.intel.com] >> Sent: Thursday, July 18, 2019 10:57 PM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] bug severity and priority >> >> >> Folks, >> >> As I mentioned in a prior email about a previous project (Yocto Project), we >> were also time-based (every 6 months). We defined Importance [0] of the >> bug based on Severity (chosen by submitter) and Priority (assigned during a >> triage process). We had 5 Priory levels in >> Bugzilla: High, Medium+, Medium, Low and Undecided, these would map to >> our Critical, High, Medium, Low and Undecided. > > Those triage meetings were very helpful because they were live discussions about the bugs with all the stockholders. I think we should consider to have a weekly meeting just to triage bugs. > Not sure we need a bug triage meeting, that should be happening in the sub-team meetings. Sau! > Regards, > José >> >> This clearly frames it based on Milestones and releases due to the time >> based nature of the Yocto Project. Notice that the High/Critical is the only >> one that is truly "gating" or milestone/release blocker, the >> Medium+, our High, won't block a milestone but be should be fixed for a >> release, but could be a dot.dot soon after the release. >> >>> Importance >>> The Importance of the bug is defined by its Priority and Severity. 
The >> Priority classifies the bug's fixing order. In other words, how soon will it get >> fixed relative to other bugs? Priorities are set during the bug Triage meeting >> and cannot be changed by the user. The priority appears to the left of the >> Severity field. Here are the values that Priority can be set to during the Triage >> meeting: >>> >>> High -- Bug fixing is planned immediately for the target milestone. >> Milestone cannot be released if there is a high bug opened against the >> milestone. High priority issues cause major functional loss of a specific >> feature that is POR for the up-comping milestone. These issues are easily hit >> by the user and greatly impact the user experience or customer >> requirements. Finally, these issues could be urgent security fixes that need to >> be corrected in a prior release. The bug assignee is not to change the target >> milestones for High bugs without prior approval of the Triage team. >>> Medium+ -- Bug fixing is planned before the milestone and must be fixed or >> have a solution planned before the release is finalized. These issues are not >> show-stoppers but have somewhat significant impact to system functions >> and user experience. >>> Medium -- These are important issues we keep track and try to plan fixing >> for the release. They have limited impact for the system functions and >> releases. >>> Low -- Bug fixing is only done opportunistically. Generally not planned for >> the up-coming project release. Issues that are not a POR feature request, or >> are hard to reproduce fall into this category. >>> Undecided -- These issues are newly reported and are undecided before >> Triage. Issues that are a feature request, which isn't approved for future >> release yet. This issue will be changed to have an actual Priority after the >> Triage team approves it. >>> Note: High impact but Low Priority bugs can be documented in the release >> notes. >>> >>> The Severity indicates how much the issue impacted the person reporting >> the bug. Severity can be categorized into five areas. >>> >>> Critical -- Crashes, hang, loss of data, negative impact to other components, >> memory leak etc. >>> Major -- Major loss of functionality of POR. >>> Normal -- Regular issue, some loss of functionality under certain >> circumstance. This is the default Severity. >>> Minor -- Minor loss of functionality, or issues with easy workaround >> available. >>> Enhancement -- Request for enhancement or new feature to be worked. >> >> I hope the helps by provide a different viewpoint from another project. >> >> Sau! >> >> [0] >> https://wiki.yoctoproject.org/wiki/Bugzilla_Configuration_and_Bug_Tracking >> #Importance >> >> On 7/17/19 3:41 AM, Zvonar, Bill wrote: >>> Hi Cindy, >>> >>> Thought about this some more, sorry it took me so long to respond further. >>> >>> I agree with splitting out the definitions of release priority/importance >> (which is subjective) from the technical severity (which is I'd say much less >> subjective). >>> >>> Do we agree that one of the key next steps is to define the severity levels >> for defects in different domains? >>> >>> Once we have those agreed and written down somewhere, they can be >> used as guidance for people that are opening Launchpads, and for those that >> screen them. Someone will note that some bugs cross domains, so it's not as >> simple as looking at one set of severity definitions, but let's cross that bridge >> next. 
>>> >>> Then, if we've got general alignment on the severity definitions per domain, >> we can sort out what to use as a QRC formula for a release, I think. >>> >>> Btw, it'd be nice if Launchpad had a field for Severity, so we could track that >> more easily - does anybody know if we can just request this & get it added as >> a custom field? >>> >>> Bill... >>> >>> -----Original Message----- >>> From: Xie, Cindy >>> Sent: Wednesday, July 10, 2019 7:13 PM >>> To: Zvonar, Bill ; starlingx- >> discuss at lists.starlingx.io; Khalil, Ghada >>> Subject: RE: bug severity and priority >>> >>> Bill, >>> I definitely agree that not all Medium shall be pushed to stx.3.0, this needs >> to be assessed carefully. But if we combine the severity and priority together, >> then this decision needs to put resource factor in consideration as well. >>> >>> Actually, I think it's confusing of calling individual LP "gating" - I understand >> that we want to get the product quality to a good shape and want to get bugs >> fixed as many as possible before we ship it. I will suggest to use defects# as >> part of release criteria (QRC). Example could be: >>> >>> Number of Critical P1 defects Zero >>> Number of High P2 defects < x >>> Number of Medium P3 defects < y >>> >>> And the only thing we need to agree on is the "x" and "y". It makes TSC or >> release team to make decision easier. The QRC needs to be agreed earlier >> instead of right before the release decision shall be made. This way, we can >> really direct our engineering resource working on the most important items >> and we all have an agreed common goal. >>> >>> Thanks. - cindy >>> >>> -----Original Message----- >>> From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] >>> Sent: Thursday, July 11, 2019 1:39 AM >>> To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io; >> Khalil, Ghada >>> Subject: RE: bug severity and priority >>> >>> Hi Cindy, >>> >>> Thanks for sending this, I think this gives us something to start the >> discussion. >>> >>> However we decide to align on severity/priority (I'll comment on that more >> later, need to think about it more), I think we need to be careful before we >> move all mediums to 3.0, it may be too much of a Gordian knot solution. >>> >>> I think we need to assess the mediums (as Yong suggested earlier) to say >> why they should or should not be in 2.0. I also think this may help us sort >> out what our gating criteria are. >>> >>> Bill... >>> >>> -----Original Message----- >>> From: Xie, Cindy >>> Sent: Wednesday, July 10, 2019 10:42 AM >>> To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill >> ; Khalil, Ghada >>> Subject: bug severity and priority >>> >>> Bill/Ghada, >>> I am sending out my definition of bug severity and priority: >>> >>> Bug Exposure or Severity Definition >>> 1- Critical Product or key feature is not usable for intended purpose. >>> 2- High Product or key feature is not reliably usable for >> intended purpose or use is significantly impaired >>> 3 - Medium Product or key feature is usable provided by a workaround >>> 4 - Low Tolerable impact to user experience with minimal >> service and support costs >>> >>> Bug Priority Definition >>> P1 - Stopper Resolution of this defect takes precedence over other defects >> and most other development activities. This level is used to focus maximum >> development team resources to resolve a defect in the shortest possible >> timeframe. 
>>> P2 - High Resolution of the defect has precedence over resolving other >> defects with lesser classifications of priority. The urgency to fix a P2 priority >> defect is imminent. - P2 priority defects are intended to be resolved by the >> next planned external release of the software. >>> P3 - Medium Resolution of the defect has precedence over resolving other >> defects with lesser classifications of priority. - P3 priority defects must have a >> planned timeframe for a verified resolution. >>> P4 - Low Resolution of the defect has least urgency to resolve, P4 >> priority defects may or may not have plans to resolve. >>> >>> Let's discuss this and agree how we'd like to use them. My suggestion for >> current "Medium" is that we can mark them as "stx.3.0" and then in the >> beginning of stx.3, they can move Priority to "high" due to the fact they want >> to get them fixed in 3.0. >>> >>> But the bug severity should never change because they are standard. >>> >>> Thx. - cindy >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >

From jose.perez.carranza at intel.com  Fri Jul 19 15:01:39 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Fri, 19 Jul 2019 15:01:39 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FE52F4@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FE52F4@SHSMSX104.ccr.corp.intel.com>
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2B35DDE5@FMSMSX125.amr.corp.intel.com>

> -----Original Message----- > From: Xie, Cindy > Sent: Friday, July 19, 2019 9:35 AM > To: Perez Carranza, Jose ; Saul Wold > ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] bug severity and priority > > Jose, > Just to clarify: for the weekly bug triage meeting, you only ask to triage the > new bugs, right?

Yes, only the new ones should be triaged.

> > My concern is about the triage frequency: right now, the new bugs are > triaged almost on a daily basis, mostly by Ghada by consulting technical experts. > If we switch to a triage meeting, not sure how the new LPs can be handled > timely. > > But agree that having a triage meeting is a good idea. > Thx. - cindy >

To mitigate this concern, as Saul pointed out, we should ensure we have a "triage section" in each subproject meeting, while making sure all the stakeholders for the specific bugs are online to provide feedback.
> -----Original Message----- > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > Sent: Friday, July 19, 2019 8:16 PM > To: Saul Wold ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] bug severity and priority > > > -----Original Message----- > > From: Saul Wold [mailto:sgw at linux.intel.com] > > Sent: Thursday, July 18, 2019 10:57 PM > > To: starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] bug severity and priority > > > > > > Folks, > > > > As I mentioned in a prior email about a previous project (Yocto > > Project), we were also time-based (every 6 months). We defined > > Importance [0] of the bug based on Severity (chosen by submitter) and > > Priority (assigned during a triage process). We had 5 Priory levels > > in > > Bugzilla: High, Medium+, Medium, Low and Undecided, these would map > to > > our Critical, High, Medium, Low and Undecided. > > Those triage meetings were very helpful because they were live discussions > about the bugs with all the stockholders. I think we should consider to have a > weekly meeting just to triage bugs. > > Regards, > José > > > > This clearly frames it based on Milestones and releases due to the > > time based nature of the Yocto Project. Notice that the High/Critical > > is the only one that is truly "gating" or milestone/release blocker, > > the > > Medium+, our High, won't block a milestone but be should be fixed for > > Medium+a > > release, but could be a dot.dot soon after the release. > > > > > Importance > > > The Importance of the bug is defined by its Priority and Severity. > > > The > > Priority classifies the bug's fixing order. In other words, how soon > > will it get fixed relative to other bugs? Priorities are set during > > the bug Triage meeting and cannot be changed by the user. The priority > > appears to the left of the Severity field. Here are the values that > > Priority can be set to during the Triage > > meeting: > > > > > > High -- Bug fixing is planned immediately for the target milestone. > > Milestone cannot be released if there is a high bug opened against the > > milestone. High priority issues cause major functional loss of a > > specific feature that is POR for the up-comping milestone. These > > issues are easily hit by the user and greatly impact the user > > experience or customer requirements. Finally, these issues could be > > urgent security fixes that need to be corrected in a prior release. > > The bug assignee is not to change the target milestones for High bugs > without prior approval of the Triage team. > > > Medium+ -- Bug fixing is planned before the milestone and must be > > > Medium+ fixed or > > have a solution planned before the release is finalized. These issues > > are not show-stoppers but have somewhat significant impact to system > > functions and user experience. > > > Medium -- These are important issues we keep track and try to plan > > > fixing > > for the release. They have limited impact for the system functions and > > releases. > > > Low -- Bug fixing is only done opportunistically. Generally not > > > planned for > > the up-coming project release. Issues that are not a POR feature > > request, or are hard to reproduce fall into this category. > > > Undecided -- These issues are newly reported and are undecided > > > before > > Triage. Issues that are a feature request, which isn't approved for > > future release yet. This issue will be changed to have an actual > > Priority after the Triage team approves it. 
> > > Note: High impact but Low Priority bugs can be documented in the > > > release > > notes. > > > > > > The Severity indicates how much the issue impacted the person > > > reporting > > the bug. Severity can be categorized into five areas. > > > > > > Critical -- Crashes, hang, loss of data, negative impact to other > > > components, > > memory leak etc. > > > Major -- Major loss of functionality of POR. > > > Normal -- Regular issue, some loss of functionality under certain > > circumstance. This is the default Severity. > > > Minor -- Minor loss of functionality, or issues with easy workaround > > available. > > > Enhancement -- Request for enhancement or new feature to be worked. > > > > I hope the helps by provide a different viewpoint from another project. > > > > Sau! > > > > [0] > > https://wiki.yoctoproject.org/wiki/Bugzilla_Configuration_and_Bug_Trac > > king > > #Importance > > > > On 7/17/19 3:41 AM, Zvonar, Bill wrote: > > > Hi Cindy, > > > > > > Thought about this some more, sorry it took me so long to respond > further. > > > > > > I agree with splitting out the definitions of release > > > priority/importance > > (which is subjective) from the technical severity (which is I'd say > > much less subjective). > > > > > > Do we agree that one of the key next steps is to define the severity > > > levels > > for defects in different domains? > > > > > > Once we have those agreed and written down somewhere, they can be > > used as guidance for people that are opening Launchpads, and for those > > that screen them. Someone will note that some bugs cross domains, so > > it's not as simple as looking at one set of severity definitions, but > > let's cross that bridge next. > > > > > > Then, if we've got general alignment on the severity definitions per > > > domain, > > we can sort out what to use as a QRC formula for a release, I think. > > > > > > Btw, it'd be nice if Launchpad had a field for Severity, so we could > > > track that > > more easily - does anybody know if we can just request this & get it > > added as a custom field? > > > > > > Bill... > > > > > > -----Original Message----- > > > From: Xie, Cindy > > > Sent: Wednesday, July 10, 2019 7:13 PM > > > To: Zvonar, Bill ; starlingx- > > discuss at lists.starlingx.io; Khalil, Ghada > > > Subject: RE: bug severity and priority > > > > > > Bill, > > > I definitely agree that not all Medium shall be pushed to stx.3.0, > > > this needs > > to be assessed carefully. But if we combine the severity and priority > > together, then this decision needs to put resource factor in consideration > as well. > > > > > > Actually, I think it's confusing of calling individual LP "gating" - > > > I understand > > that we want to get the product quality to a good shape and want to > > get bugs fixed as many as possible before we ship it. I will suggest > > to use defects# as part of release criteria (QRC). Example could be: > > > > > > Number of Critical P1 defects Zero > > > Number of High P2 defects < x > > > Number of Medium P3 defects < y > > > > > > And the only thing we need to agree on is the "x" and "y". It makes > > > TSC or > > release team to make decision easier. The QRC needs to be agreed > > earlier instead of right before the release decision shall be made. > > This way, we can really direct our engineering resource working on the > > most important items and we all have an agreed common goal. > > > > > > Thanks. 
- cindy > > > > > > -----Original Message----- > > > From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] > > > Sent: Thursday, July 11, 2019 1:39 AM > > > To: Xie, Cindy ; > > > starlingx-discuss at lists.starlingx.io; > > Khalil, Ghada > > > Subject: RE: bug severity and priority > > > > > > Hi Cindy, > > > > > > Thanks for sending this, I think this gives us something to start > > > the > > discussion. > > > > > > However we decide to align on severity/priority (I'll comment on > > > that more > > later, need to think about it more), I think we need to be careful > > before we move all mediums to 3.0, it may be too much of a Gordian knot > solution. > > > > > > I think we need to assess the mediums (as Yong suggested earlier) to > > > say > > why they should or should not be in 2.0. I also think this may help > > us sort out what our gating criteria are. > > > > > > Bill... > > > > > > -----Original Message----- > > > From: Xie, Cindy > > > Sent: Wednesday, July 10, 2019 10:42 AM > > > To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill > > ; Khalil, Ghada > > > > > Subject: bug severity and priority > > > > > > Bill/Ghada, > > > I am sending out my definition of bug severity and priority: > > > > > > Bug Exposure or Severity Definition > > > 1- Critical Product or key feature is not usable for intended purpose. > > > 2- High Product or key feature is not reliably usable for > > intended purpose or use is significantly impaired > > > 3 - Medium Product or key feature is usable provided by a workaround > > > 4 - Low Tolerable impact to user experience with minimal > > service and support costs > > > > > > Bug Priority Definition > > > P1 - Stopper Resolution of this defect takes precedence over other > defects > > and most other development activities. This level is used to focus > > maximum development team resources to resolve a defect in the shortest > > possible timeframe. > > > P2 - High Resolution of the defect has precedence over resolving other > > defects with lesser classifications of priority. The urgency to fix a > > P2 priority defect is imminent. - P2 priority defects are intended to > > be resolved by the next planned external release of the software. > > > P3 - Medium Resolution of the defect has precedence over > resolving other > > defects with lesser classifications of priority. - P3 priority > > defects must have a planned timeframe for a verified resolution. > > > P4 - Low Resolution of the defect has least urgency to resolve, P4 > > priority defects may or may not have plans to resolve. > > > > > > Let's discuss this and agree how we'd like to use them. My > > > suggestion for > > current "Medium" is to we can mark them as "stx.3.0" and then in the > > beginning of stx.3, they can move Priority to "high" due to the fact > > they want to get them fixed in 3.0. > > > > > > But the bug severity should never change because they are standard. > > > > > > Thx. 
- cindy > > > > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yong.hu at intel.com Fri Jul 19 15:56:35 2019 From: yong.hu at intel.com (Yong Hu) Date: Fri, 19 Jul 2019 08:56:35 -0700 Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX Message-ID: Hi Dean, For LP[0], there is a patch (https://review.opendev.org/#/c/651969/) in Nova upstream, what's the method/process for StarlingX to cherry-pick it for testing a LP reported in StarlingX? [0]:https://bugs.launchpad.net/starlingx/+bug/1820882 regards, Yong From chenjie.xu at intel.com Fri Jul 19 01:29:59 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Fri, 19 Jul 2019 01:29:59 +0000 Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart In-Reply-To: References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net> Message-ID: Hi Kunpeng, You can check the bridge and openflows by the following commands: ovs-vsctl show ovs-ofctl dump-flows br-int ovs-ofctl dump-flows br-phy0 The virtual network used by VMs is based on those openflows. And restarting ovs-vswitchd can’t reinstall the openflows. That’s why when the ovs-vswitchd restart, you will lose the connections to VMs. I think we need to figure out why ovs-vswitchd is restarted when you restart the VM. Could you please check the below logs to see why ovs-vswitchd is restarted? /var/log/openvswitch/ovs-vswitchd.log /var/log/syslog neutron log (the log file is specified in /etc/neutron/neutron.conf) Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Thursday, July 18, 2019 7:09 PM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart I also find that when the ovs-vswitchd is restarted, I will lose the connections to VMs. 
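A rough way to confirm that, as a sketch (the bridge names are the ones from the output below), is to count the lines of ovs-ofctl dump-flows output per bridge before and after the restart:

# rough flow count per bridge (includes the one-line reply header)
for br in br-int br-phy0; do
    echo "$br: $(ovs-ofctl dump-flows $br | wc -l) lines"
done

If the counts collapse after restarting ovs-vswitchd, the flows were wiped and not reinstalled, which matches the behavior described here.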
Before restart ovs: controller-0:/home/wrsroot# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.152/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 valid_lft forever preferred_lft forever inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link valid_lft forever preferred_lft forever 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8060/64 scope link valid_lft forever preferred_lft forever 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8061/64 scope link valid_lft forever preferred_lft forever 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff 16: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:120/64 scope link valid_lft forever preferred_lft forever 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::7c3a:87ff:fea3:9803/64 scope link valid_lft forever preferred_lft forever After: controller-0:/home/wrsroot# systemctl restart ovs-vswitchd controller-0:/home/wrsroot# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.152/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 
3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 valid_lft forever preferred_lft forever inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link valid_lft forever preferred_lft forever 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8060/64 scope link valid_lft forever preferred_lft forever 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8061/64 scope link valid_lft forever preferred_lft forever 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff 16: br-phy0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:120/64 scope link valid_lft forever preferred_lft forever 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::7c3a:87ff:fea3:9803/64 scope link valid_lft forever preferred_lft forever 20: tapfb74713e-cc: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff 21: tap1a965902-0b: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff One of the VMs: [cid:image001.png at 01D53E10.A2B5CD00] On Jul 18, 2019, at 18:09, 张鲲鹏 > wrote: Hi Xu,Chenjie I have tried to create VM with 2 pci_passthrough network ports without DPDK, there was the same problem when I rebooted it. Also, it was same when I reboot the VM with 2 SR-IOV VFs. Do you have any ideas to debug this problem? Thanks Kunpeng On Jul 17, 2019, at 14:59, Xu, Chenjie > wrote: Hi Kunpeng, Maybe you can use SR-IOV and passthrough the VF which has similar performance to physical NIC to the VM. And then you can use DPDK inside the VM with the VF. Sorry, I don’t have easy way to disable DPDK in stx1.0. The following command is used for stx2.0 which is still in progress: system modify --vswitch_type none Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Tuesday, July 16, 2019 5:40 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart Hi Chenjie, Well, I will try the network topology as you said. But passthrough NIC with DPDK is our customer’s requirement. And do you have some easy ways to disable dpdk of openvswitch in stx1.0? I had tried to execute “system modify --vswitch_type none” before “system host-unlock controller-0", but it doesn’t work well. 
Thanks Kunpeng On Jul 16, 2019, at 16:49, Xu, Chenjie > wrote: Hi Kunpeng, When you reboot the VM with two physical pci-passthrough NICs, ovs-vswtichd is restarted and the interfaces and bridges are down. The virtual networks used by the VMs are based on these interfaces and bridges. So other VMs will lost connections. Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which is used to passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology: 2 network port without DPDK > VM 2 network port with DPDK > Data Network 1 network port without DPDK > OAM Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Tuesday, July 16, 2019 3:54 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart Hi guys, Recently I got a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, then all of VMs cannot be connected. I lost connections with all VMs and also the VMs lost each others. Below is the StarlingX environment. 1. stx1.0 version, bootimage[1] 2. Simplex deployment 3. 5 Network ports. Only one don’t support DPDK,and it is used to OAM Network. In the rest, two are used to data network, and another two are used to passthrough to a VM. 4. The VM was attached two more virtual networks. I have tested the case of attaching one virtual net, it was no problem. When I reboot the VM, something were happened. The interfaces and bridges were down, all the virtual dhcp services were down and ovs-vswitchd was restarted. But when I up the interfaces and dhcp services and reboot the other VMs, I have got the connections with them again. It’s ok when to reboot the VM without physical NIC. We think it may be caused by ovs-dpdk, so we stop to use ovs-dpdk and start the ovs manually, the problem was gone. I cannot understand the problem, anybody could give me some comments for it? Thanks a lot. [1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso Kunpeng _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 103649 bytes Desc: image001.png URL: From zhang.kunpeng at 99cloud.net Fri Jul 19 02:20:30 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Fri, 19 Jul 2019 10:20:30 +0800 Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart In-Reply-To: References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net> Message-ID: Hi Chenjie, This is the logs. At UTC 2019:02:04 I restarted the VM. 
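One rough way to pull the minutes around the restart out of the logs listed above (the paths are the ones Chenjie mentioned; the timestamp pattern is an assumption about each file's log format) is:

for f in /var/log/openvswitch/ovs-vswitchd.log /var/log/syslog; do
    echo "== $f =="
    grep -E '02:0[3-5]:' "$f"
done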
In openstack.log I found some error messages, I don’t know if it’s relevant. 2019-07-19 02:04:16.141 186477 INFO eventlet.wsgi.server [req-6600ca74-1f93-4e54-88c1-35f964f1e055 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0178909 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json]: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file) 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json] dies due to the error: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file) 2019-07-19 02:04:22.077 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused 2019-07-19 02:04:22.079 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused) 2019-07-19 02:04:22.089 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused 2019-07-19 02:04:22.090 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out: Timeout: 10 seconds 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Failed to communicate with the switch: RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int Traceback (most recent call last): 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int flows = self.dump_flows(constants.CANARY_TABLE) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 147, in dump_flows 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int reply_multi=True) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 95, in _send_msg 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int raise RuntimeError(m) 2019-07-19 02:04:23.171 243655 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int 2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically. 2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389 2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports. 2019-07-19 02:04:24.242 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Mapping physical network providernet-a to bridge br-phy0 2019-07-19 02:04:24.295 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Bridge br-phy0 has datapath-ID 0000f8f21e640120 Kunpeng > On Jul 19, 2019, at 09:29, Xu, Chenjie wrote: > > Hi Kunpeng, > You can check the bridge and openflows by the following commands: > ovs-vsctl show > ovs-ofctl dump-flows br-int <> > ovs-ofctl dump-flows br-phy0 > > The virtual network used by VMs is based on those openflows. And restarting ovs-vswitchd can’t reinstall the openflows. That’s why when the ovs-vswitchd restart, you will lose the connections to VMs. > > I think we need to figure out why ovs-vswitchd is restarted when you restart the VM. Could you please check the below logs to see why ovs-vswitchd is restarted? > /var/log/openvswitch/ovs-vswitchd.log > /var/log/syslog > neutron log (the log file is specified in /etc/neutron/neutron.conf) > > Best Regards, > Xu, Chenjie > > <>From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net ] > Sent: Thursday, July 18, 2019 7:09 PM > To: Xu, Chenjie > > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart > > I also find that when the ovs-vswitchd is restarted, I will lose the connections to VMs. 
> > Before restart ovs: > > controller-0:/home/wrsroot# ip a > 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet 127.168.204.3/24 brd 127.168.204.255 scope host lo > valid_lft forever preferred_lft forever > inet 169.254.202.2/24 scope global lo > valid_lft forever preferred_lft forever > inet 127.168.204.2/24 scope host secondary lo > valid_lft forever preferred_lft forever > inet 127.168.204.5/24 scope host secondary lo > valid_lft forever preferred_lft forever > inet 127.168.204.6/24 scope host secondary lo > valid_lft forever preferred_lft forever > inet 127.168.204.152/24 scope host secondary lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff > 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 > link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff > inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 > valid_lft forever preferred_lft forever > inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link > valid_lft forever preferred_lft forever > 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff > 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff > 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff > 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 > link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff > inet6 fe80::faf2:1eff:fe64:8060/64 scope link > valid_lft forever preferred_lft forever > 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 > link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff > inet6 fe80::faf2:1eff:fe64:8061/64 scope link > valid_lft forever preferred_lft forever > 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 > link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff > 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 > link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff > 16: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 > link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff > inet6 fe80::faf2:1eff:fe64:120/64 scope link > valid_lft forever preferred_lft forever > 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 > link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff > inet6 fe80::7c3a:87ff:fea3:9803/64 scope link > valid_lft forever preferred_lft forever > > After: > > controller-0:/home/wrsroot# systemctl restart ovs-vswitchd > controller-0:/home/wrsroot# ip a > 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet 127.168.204.3/24 brd 127.168.204.255 scope host lo > valid_lft forever preferred_lft forever > inet 169.254.202.2/24 scope global lo > valid_lft forever preferred_lft forever > inet 127.168.204.2/24 scope host secondary lo > valid_lft forever preferred_lft forever > inet 127.168.204.5/24 scope host secondary lo > valid_lft forever preferred_lft forever > inet 127.168.204.6/24 scope host secondary lo > valid_lft forever preferred_lft forever > 
inet 127.168.204.152/24 scope host secondary lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff > 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 > link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff > inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 > valid_lft forever preferred_lft forever > inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link > valid_lft forever preferred_lft forever > 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff > 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff > 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff > 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 > link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff > inet6 fe80::faf2:1eff:fe64:8060/64 scope link > valid_lft forever preferred_lft forever > 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 > link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff > inet6 fe80::faf2:1eff:fe64:8061/64 scope link > valid_lft forever preferred_lft forever > 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 > link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff > 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 > link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff > 16: br-phy0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 > link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff > inet6 fe80::faf2:1eff:fe64:120/64 scope link > valid_lft forever preferred_lft forever > 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 > link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff > inet6 fe80::7c3a:87ff:fea3:9803/64 scope link > valid_lft forever preferred_lft forever > 20: tapfb74713e-cc: mtu 1500 qdisc noop state DOWN group default qlen 1000 > link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff > 21: tap1a965902-0b: mtu 1500 qdisc noop state DOWN group default qlen 1000 > link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff > > One of the VMs: > > > > > On Jul 18, 2019, at 18:09, 张鲲鹏 > wrote: > > Hi Xu,Chenjie > > I have tried to create VM with 2 pci_passthrough network ports without DPDK, there was the same problem when I rebooted it. > Also, it was same when I reboot the VM with 2 SR-IOV VFs. > Do you have any ideas to debug this problem? > > Thanks > Kunpeng > > > On Jul 17, 2019, at 14:59, Xu, Chenjie > wrote: > > Hi Kunpeng, > Maybe you can use SR-IOV and passthrough the VF which has similar performance to physical NIC to the VM. And then you can use DPDK inside the VM with the VF. > > Sorry, I don’t have easy way to disable DPDK in stx1.0. The following command is used for stx2.0 which is still in progress: > system modify --vswitch_type none > > Best Regards, > Xu, Chenjie > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net ] > Sent: Tuesday, July 16, 2019 5:40 PM > To: Xu, Chenjie > > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart > > Hi Chenjie, > > Well, I will try the network topology as you said. But passthrough NIC with DPDK is our customer’s requirement. 
> And do you have some easy ways to disable dpdk of openvswitch in stx1.0? > I had tried to execute “system modify --vswitch_type none” before “system host-unlock controller-0", but it doesn’t work well. > > Thanks > Kunpeng > > On Jul 16, 2019, at 16:49, Xu, Chenjie > wrote: > > Hi Kunpeng, > When you reboot the VM with two physical pci-passthrough NICs, ovs-vswtichd is restarted and the interfaces and bridges are down. The virtual networks used by the VMs are based on these interfaces and bridges. So other VMs will lost connections. > > Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which is used to passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology: > 2 network port without DPDK > VM > 2 network port with DPDK > Data Network > 1 network port without DPDK > OAM > > Best Regards, > Xu, Chenjie > > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net ] > Sent: Tuesday, July 16, 2019 3:54 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart > > Hi guys, > > Recently I got a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, then all of VMs cannot be connected. I lost connections with all VMs and also the VMs lost each others. > > Below is the StarlingX environment. > > 1. stx1.0 version, bootimage[1] > 2. Simplex deployment > 3. 5 Network ports. Only one don’t support DPDK,and it is used to OAM Network. In the rest, two are used to data network, and another two are used to passthrough to a VM. > 4. The VM was attached two more virtual networks. I have tested the case of attaching one virtual net, it was no problem. > > When I reboot the VM, something were happened. The interfaces and bridges were down, all the virtual dhcp services were down and ovs-vswitchd was restarted. But when I up the interfaces and dhcp services and reboot the other VMs, I have got the connections with them again. > > It’s ok when to reboot the VM without physical NIC. We think it may be caused by ovs-dpdk, so we stop to use ovs-dpdk and start the ovs manually, the problem was gone. > > I cannot understand the problem, anybody could give me some comments for it? Thanks a lot. > > [1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso > > Kunpeng > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: log.tar Type: application/x-tar Size: 5898240 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgw at linux.intel.com Fri Jul 19 16:54:53 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 19 Jul 2019 09:54:53 -0700 Subject: [Starlingx-discuss] [Build] Description and License information on hub.docker.com Message-ID: Folks, I am not sure who exactly should own this, but we should probably have a Description and License section on hub.docker.com for the StarlingX containers, similar to CentOS [0] and Ubuntu [1]: they use the same basic information template, and I think the license text is similar as well. I know that Scott might have the keys to the docker kingdom! Thoughts? Sau! [0] https://hub.docker.com/_/centos [1] https://hub.docker.com/_/ubuntu From dtroyer at gmail.com Fri Jul 19 17:08:28 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 19 Jul 2019 12:08:28 -0500 Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX In-Reply-To: References: Message-ID: On Fri, Jul 19, 2019 at 10:57 AM Yong Hu wrote: > For LP[0], there is a patch (https://review.opendev.org/#/c/651969/) in > Nova upstream, what's the method/process for StarlingX to cherry-pick it > for testing a LP reported in StarlingX? > > [0]:https://bugs.launchpad.net/starlingx/+bug/1820882 At a high level, you would clone the stx/stein.2 branch of the stx-nova repo [0], cherry-pick/backport the commit you want to test, rebuild the Nova docker image, and test that. I do have some Zuul jobs in starlingx/tis-repo that will pull that Nova branch and run the unit, functional and pep8 tox jobs in OpenStack CI. Given that the stx/stein branches also carry the NUMA live migration patches, you would probably want to do whatever other live migration testing would be done upstream to validate this in the StarlingX context.
dt [0] https://github.com/starlingx-staging/stx-nova/tree/stx/stein.2 -- Dean Troyer dtroyer at gmail.com From Don.Penney at windriver.com Fri Jul 19 18:03:42 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 19 Jul 2019 18:03:42 +0000 Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC152D0F3@ALA-MBD.corp.ad.wrs.com> For updating an image for testing, take a look at: https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Incremental_Image_Updates As Dean notes, clone the repo and cherry-pick the commit, and then do something like: time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \ --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \ --module-src ${path_to_cloned_repo}/stx-nova ie: # clone stx-nova, get stx/stein.2 cd /localdisk/loadbuild/dpenney/ mkdir nova-update cd nova-update/ git clone https://github.com/starlingx-staging/stx-nova.git cd stx-nova/ git fetch https://github.com/starlingx-staging/stx-nova.git stx/stein.2 git checkout FETCH_HEAD # cherry-pick update git fetch https://review.opendev.org/openstack/nova refs/changes/69/651969/13 && git cherry-pick FETCH_HEAD # Fix up conflicts, etc # Build updated image, from 20190715T233000Z build as base, # specifying cloned/modified repo time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \ --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \ --module-src /localdisk/loadbuild/dpenney/nova-update/stx-nova \ --user dpenney This produces an updated image in the local registry: Updated image: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 If you also set a registry with --registry and use the --push option, the command will push the updated image to that registry. 
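For example, to push to a private registry (the registry URL and port below are placeholders only):

time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \
    --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \
    --module-src /localdisk/loadbuild/dpenney/nova-update/stx-nova \
    --user dpenney \
    --registry registry.example.com:9001 \
    --push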
This also produces an image record file: $ cat ${MY_WORKSPACE}/std/update-images/unnamed-update/image-updates.lst dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 Which you can pass as an argument to build-helm-charts.sh when building your application tarball for testing: build-helm-charts.sh \ --image-record http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/docker-images/images-centos-stable-versioned.lst \ --image-record ${MY_WORKSPACE}/std/update-images/unnamed-update/image-updates.lst \ --label centos-stable-versioned If you look at the yaml file in the tarball, you can see that it now references the updated image: $ tar xzf ${MY_WORKSPACE}/std/build-helm/stx/stx-openstack-1.0-17-centos-stable-versioned.tgz -O ./stx-openstack.yaml | grep stx-nova: nova_api: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_cell_setup: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_compute: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_compute_ironic: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_compute_ssh: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_conductor: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_consoleauth: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_db_sync: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_novncproxy: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_placement: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_scheduler: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_spiceproxy: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_spiceproxy_assets: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Friday, July 19, 2019 1:08 PM To: starlingx-discuss at lists.starlingx.io Cc: zhu.boxiang at 99cloud.net Subject: Re: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX On Fri, Jul 19, 2019 at 10:57 AM Yong Hu wrote: > For LP[0], there is a patch (https://review.opendev.org/#/c/651969/) in > Nova upstream, what's the method/process for StarlingX to cherry-pick it > for testing a LP reported in StarlingX? > > [0]:https://bugs.launchpad.net/starlingx/+bug/1820882 At a high level you would clone the stx-nova repo stx/stein.2 branch [0] and cherry-pick/backport the commit you want to test, and rebuild the Nova docker image and test that. I do have some Zuul jobs in starlingx/tis-repo that will pull that Nova branch and run the unit, functional and pep8 tox jobs to test in OpenStack CI. Given that the stx/stein branches also carry the NUMA live migration patches you would probably want to do whatever other live migration testing would be done upstream to validate this in the StarlingX context. 
dt [0] https://github.com/starlingx-staging/stx-nova/tree/stx/stein.2 -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Fri Jul 19 21:37:47 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 19 Jul 2019 21:37:47 +0000 Subject: [Starlingx-discuss] stx.3.0 Milestone-1 Declared Message-ID: <151EE31B9FCCA54397A757BC674650F0C15705C9@ALA-MBD.corp.ad.wrs.com> As per review in the StarlingX Release meeting on July 18/2019, stx.3.0 Milestone-1 has been declared. The minutes are as follows: - Milestone-1 - Initial Content List available - https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 - Reviewed list and agreed that it meets MS-1 criteria. Milestone declared - Open Action: Need to determine review bandwidth for stx.3.0 and how the features are prioritized in terms of that review bandwidth - A key input here is having a view of where the code churn is expected for each feature Regards, Ghada -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, July 15, 2019 6:27 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx.3.0 Milestone-1 status Hello all, The stx.3.0 milestone-1 is planned for this week. The criteria for the milestone are as follows: - Release priorities and major features defined. - High level resourcing secured. Reference: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Milestones Based on the candidate feature list that has been reviewed with the TSC and project leads from the community, the release planning team feels that we are in a good position for the milestone. The candidate list is available at: https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 Additional features/specs can still be proposed/reviewed. The next milestone is spec freeze which is currently scheduled for the week of August 12. At that point, no new specs will be considered for stx.3.0. The milestone will be more formally reviewed in the next community meeting on July 17/2019. 
Regards, Ghada _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Fri Jul 19 21:42:05 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 19 Jul 2019 21:42:05 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - July 18/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0C15705DB@ALA-MBD.corp.ad.wrs.com> Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release meeting agenda / notes July 18 2019 stx.2.0 - Feature Exception Status - Code Removal Stories: 2 reviews left to merge - K8s upversion to 1.5: Expected to merge Friday/Monday - Feature Testing Status - Tracker: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237 - Containers - Total / Pass / Fail / Blocked - Ada - 75 / 57 / 1 / 5 - 97% progress - Numan - 82/58/3/0 - 74% progress - Containers - helm overrides & ironic 12 / 0 / 0 / 0 - Regression Testing Status - Tracker: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175 - Second round of regression is in progress by Ada's team. - Stable / low risk domains are identified and will be tested later as a second priority - Ada's team helping with testing of some additional domains such as: storage - Open issues reported from regression: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.regression - Running with ISO 20190712T013000Z - Total / Pass / Fail / Blocked = 415 / 250 / 8 / 34 - Pass rate: 96.8% - Automated Regression - Working on building a report to share with the community - Bugs - Bill is collecting input from the different project leads on the status of the stx.2.0 launchpads - Release Notes - Saul raised this; we should start thinking about what we need here. Discuss in next meeting. stx.3.0 - Milestone-1 - Initial Content List available - https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 - Reviewed list and agreed that it meets MS-1 criteria. Milestone declared - Open Action: Need to determine review bandwidth for stx.3.0 and how the features are prioritized in terms of that review bandwidth - A key input here is having a view of where the code churn is expected for each feature Blueprints for Feature Backlog - Openstack foundation still recommends not using launchpad as the foundation is looking to transition away from LP. However, other teams continue to use LP and there is no time-frame to fully transition off of it - For stx.3.0, we have the list of the features already in the release planning google sheet, so no need to do anything else for that - Bill suggests that we can use a google sheet for feature backlog management as well - The key here is the backlog needs to be groomed and maintained. That's the key issue as opposed to the tool used itself. - Next step is to make sure the TSC team has no big concerns with using a google sheet for the backlog - Once we have agreement, the items discussed in the PTG (which didn't make stx.3.0) can be copied over From Frank.Miller at windriver.com Fri Jul 19 21:45:17 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 19 Jul 2019 21:45:17 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Weekly Meeting Message-ID: Team Agenda for July 22 meeting: 1. 
SB status: a) Review 2005860 kubernetes upversion status; decision on stx.2.0 or stx.3.0 [Al/Brent] b) Mingyuan's final commit for 2004760 (ironic): https://review.opendev.org/#/c/669782/ 2. stx.2.0 gating bugs: 33 (down from 41 one week ago - thanks for getting issues addressed!) a) Plan for scrub of medium priority [Frank] b) Updates from primes for gating LPs that do not yet have solid forecasts: Bart: Application recovery from various controller events [1837055, 1833730, 1829931, 1829432, 1816842] Bob: Application apply or re-apply failures [1836609, Shuicheng: Application apply or re-apply failures [1829936, 1836378] Erich: hypervisor remains down after force lock/unlock [1824881] Ovidiu: Pods on existing worker nodes restart when adding a new worker node [1820902] c) Others? Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3165 bytes Desc: not available URL: From maria.g.perez.ibarra at intel.com Fri Jul 19 23:42:54 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 19 Jul 2019 23:42:54 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190719 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-19 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] 1 TCs FAIL Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] 24 TCs FAIL Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] 1 TCs FAIL Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] 24 TCs FAIL Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Create instance from Image or from Volume fails https://bugs.launchpad.net/starlingx/+bug/1837241 
Application platform-integ-apps is not automatically applied; it remains in "uploaded" status. Ceph reports https://bugs.launchpad.net/starlingx/+bug/1837263 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Jul 19 23:55:43 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 19 Jul 2019 23:55:43 +0000 Subject: [Starlingx-discuss] bug severity and priority In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2B35DDE5@FMSMSX125.amr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FE52F4@SHSMSX104.ccr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DDE5@FMSMSX125.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE5587@SHSMSX104.ccr.corp.intel.com> Another idea is to use the mailing list: each day, the triage lead sends out a list of "new" bugs that need triage, and sub-project leads respond on the mailing list. That way we keep the information public, and we can assign bugs to appropriate owners (or people who volunteer). Thx. - cindy -----Original Message----- From: Perez Carranza, Jose Sent: Friday, July 19, 2019 11:02 PM To: Xie, Cindy ; Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] bug severity and priority > -----Original Message----- > From: Xie, Cindy > Sent: Friday, July 19, 2019 9:35 AM > To: Perez Carranza, Jose ; Saul Wold > ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] bug severity and priority > > Jose, > Just to clarify: for the weekly bug triage meeting, you are only asking to > triage the new bugs, right? Yes, only the new ones should be triaged. > > My concern is about the triage frequency: right now, the new bugs are > triaged almost on a daily basis, mostly by Ghada by consulting technical experts. > If we switch to a triage meeting, I am not sure how new LPs can be > handled in a timely way. > > But I agree that having a triage meeting is a good idea. > Thx. - cindy To mitigate this concern, as Saul pointed out, we should ensure there is a "triage" section in each subproject meeting, while making sure all the stakeholders for the specific bugs are online to provide feedback. > -----Original Message----- > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > Sent: Friday, July 19, 2019 8:16 PM > To: Saul Wold ; > starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] bug severity and priority > > > -----Original Message----- > > From: Saul Wold [mailto:sgw at linux.intel.com] > > Sent: Thursday, July 18, 2019 10:57 PM > > To: starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] bug severity and priority > > > > Folks, > > > > As I mentioned in a prior email about a previous project (Yocto > > Project), we were also time-based (every 6 months). We defined > > Importance [0] of the bug based on Severity (chosen by submitter) > > and Priority (assigned during a triage process). We had 5 Priority > > levels in Bugzilla: High, Medium+, Medium, Low and Undecided; these would map > > to our Critical, High, Medium, Low and Undecided.
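As a rough sketch of how the daily "new bugs" list for such a triage mail could be generated (assuming the public Launchpad REST API behaves as below and that jq is installed):

# Sketch: list untriaged (New) StarlingX bugs for the daily triage mail.
curl -s "https://api.launchpad.net/1.0/starlingx?ws.op=searchTasks&status=New" \
    | jq -r '.entries[] | "\(.web_link)  \(.title)"'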
> > Those triage meetings were very helpful because they were live
> discussions about the bugs with all the stakeholders. I think we
> should consider having a weekly meeting just to triage bugs.
>
> Regards,
> José
>
> > This clearly frames it based on Milestones and releases due to the
> > time-based nature of the Yocto Project. Notice that the High/Critical
> > is the only one that is truly "gating" or a milestone/release blocker;
> > the Medium+, our High, won't block a milestone but should be fixed for
> > a release, though it could be a dot.dot soon after the release.
> >
> > > Importance
> > > The Importance of the bug is defined by its Priority and Severity.
> > > The Priority classifies the bug's fixing order. In other words, how
> > > soon will it get fixed relative to other bugs? Priorities are set
> > > during the bug Triage meeting and cannot be changed by the user. The
> > > priority appears to the left of the Severity field. Here are the
> > > values that Priority can be set to during the Triage meeting:
> > >
> > > High -- Bug fixing is planned immediately for the target milestone.
> > > Milestone cannot be released if there is a high bug opened against the
> > > milestone. High priority issues cause major functional loss of a specific
> > > feature that is POR for the up-coming milestone. These issues are easily
> > > hit by the user and greatly impact the user experience or customer
> > > requirements. Finally, these issues could be urgent security fixes that
> > > need to be corrected in a prior release. The bug assignee is not to change
> > > the target milestones for High bugs without prior approval of the Triage team.
> > > Medium+ -- Bug fixing is planned before the milestone and must be fixed
> > > or have a solution planned before the release is finalized. These issues
> > > are not show-stoppers but have somewhat significant impact to system
> > > functions and user experience.
> > > Medium -- These are important issues we keep track of and try to plan
> > > fixing for the release. They have limited impact on the system functions
> > > and releases.
> > > Low -- Bug fixing is only done opportunistically. Generally not planned
> > > for the up-coming project release. Issues that are not a POR feature
> > > request, or are hard to reproduce, fall into this category.
> > > Undecided -- These issues are newly reported and are undecided before
> > > Triage. Issues that are a feature request, which isn't approved for a
> > > future release yet. This issue will be changed to have an actual Priority
> > > after the Triage team approves it.
> > > Note: High impact but Low Priority bugs can be documented in the release
> > > notes.
> > >
> > > The Severity indicates how much the issue impacted the person reporting
> > > the bug. Severity can be categorized into five areas.
> > >
> > > Critical -- Crashes, hang, loss of data, negative impact to other
> > > components, memory leak etc.
> > > Major -- Major loss of functionality of POR.
> > > Normal -- Regular issue, some loss of functionality under certain
> > > circumstance. This is the default Severity.
> > > Minor -- Minor loss of functionality, or issues with an easy workaround
> > > available.
> > > Enhancement -- Request for enhancement or new feature to be worked.
> >
> > I hope this helps by providing a different viewpoint from another project.
> >
> > Sau!
> > [0] https://wiki.yoctoproject.org/wiki/Bugzilla_Configuration_and_Bug_Tracking#Importance
> >
> > On 7/17/19 3:41 AM, Zvonar, Bill wrote:
> > > Hi Cindy,
> > >
> > > Thought about this some more, sorry it took me so long to respond further.
> > >
> > > I agree with splitting out the definitions of release priority/importance
> > > (which is subjective) from the technical severity (which is, I'd say, much
> > > less subjective).
> > >
> > > Do we agree that one of the key next steps is to define the severity levels
> > > for defects in different domains?
> > >
> > > Once we have those agreed and written down somewhere, they can be used as
> > > guidance for people that are opening Launchpads, and for those that screen
> > > them. Someone will note that some bugs cross domains, so it's not as simple
> > > as looking at one set of severity definitions, but let's cross that bridge next.
> > >
> > > Then, if we've got general alignment on the severity definitions per domain,
> > > we can sort out what to use as a QRC formula for a release, I think.
> > >
> > > Btw, it'd be nice if Launchpad had a field for Severity, so we could track
> > > that more easily - does anybody know if we can just request this & get it
> > > added as a custom field?
> > >
> > > Bill...
> > >
> > > -----Original Message-----
> > > From: Xie, Cindy
> > > Sent: Wednesday, July 10, 2019 7:13 PM
> > > To: Zvonar, Bill ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
> > > Subject: RE: bug severity and priority
> > >
> > > Bill,
> > > I definitely agree that not all Medium shall be pushed to stx.3.0; this
> > > needs to be assessed carefully. But if we combine the severity and priority
> > > together, then this decision needs to take the resource factor into
> > > consideration as well.
> > >
> > > Actually, I think it's confusing to call individual LPs "gating" - I
> > > understand that we want to get the product quality to a good shape and want
> > > to get as many bugs fixed as possible before we ship it. I suggest using
> > > defect counts as part of the release criteria (QRC). An example could be:
> > >
> > > Number of Critical P1 defects: Zero
> > > Number of High P2 defects: < x
> > > Number of Medium P3 defects: < y
> > >
> > > And the only thing we need to agree on is the "x" and "y". It makes it
> > > easier for the TSC or release team to make a decision. The QRC needs to be
> > > agreed earlier, instead of right before the release decision shall be made.
> > > This way, we can really direct our engineering resources to work on the most
> > > important items and we all have an agreed common goal.
> > >
> > > Thanks. - cindy
> > >
> > > -----Original Message-----
> > > From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
> > > Sent: Thursday, July 11, 2019 1:39 AM
> > > To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
> > > Subject: RE: bug severity and priority
> > >
> > > Hi Cindy,
> > >
> > > Thanks for sending this, I think this gives us something to start the
> > > discussion.
> > >
> > > However we decide to align on severity/priority (I'll comment on that more
> > > later, need to think about it more), I think we need to be careful before we
> > > move all mediums to 3.0; it may be too much of a Gordian knot solution.
> > >
> > > I think we need to assess the mediums (as Yong suggested earlier) to say
> > > why they should or should not be in 2.0. I also think this may help us sort
> > > out what our gating criteria are.
> > >
> > > Bill...
> > >
> > > -----Original Message-----
> > > From: Xie, Cindy
> > > Sent: Wednesday, July 10, 2019 10:42 AM
> > > To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill ; Khalil, Ghada
> > > Subject: bug severity and priority
> > >
> > > Bill/Ghada,
> > > I am sending out my definition of bug severity and priority:
> > >
> > > Bug Exposure or Severity Definition
> > > 1 - Critical: Product or key feature is not usable for intended purpose.
> > > 2 - High: Product or key feature is not reliably usable for intended
> > > purpose or use is significantly impaired.
> > > 3 - Medium: Product or key feature is usable provided there is a workaround.
> > > 4 - Low: Tolerable impact to user experience with minimal service and
> > > support costs.
> > >
> > > Bug Priority Definition
> > > P1 - Stopper: Resolution of this defect takes precedence over other defects
> > > and most other development activities. This level is used to focus maximum
> > > development team resources to resolve a defect in the shortest possible
> > > timeframe.
> > > P2 - High: Resolution of the defect has precedence over resolving other
> > > defects with lesser classifications of priority. The urgency to fix a P2
> > > priority defect is imminent. P2 priority defects are intended to be resolved
> > > by the next planned external release of the software.
> > > P3 - Medium: Resolution of the defect has precedence over resolving other
> > > defects with lesser classifications of priority. P3 priority defects must
> > > have a planned timeframe for a verified resolution.
> > > P4 - Low: Resolution of the defect has least urgency to resolve; P4 priority
> > > defects may or may not have plans to resolve.
> > >
> > > Let's discuss this and agree how we'd like to use them. My suggestion for
> > > the current "Medium" bugs is that we can mark them as "stx.3.0", and then at
> > > the beginning of stx.3 they can move Priority to "high" due to the fact that
> > > they want to get them fixed in 3.0.
> > >
> > > But the bug severity should never change, because severities are standard.
> > >
> > > Thx. - cindy

From Robert.Church at windriver.com Mon Jul 22 06:01:48 2019
From: Robert.Church at windriver.com (Church, Robert)
Date: Mon, 22 Jul 2019 06:01:48 +0000
Subject: [Starlingx-discuss] stx-openstack chart optionality
Message-ID: <3E093B33-2DBF-4378-B37C-D61EF58BDC77@windriver.com>

Here's an update with regard to behavioral changes for optional charts/services.

Current behavior:
-----------------
With commit https://opendev.org/starlingx/config/commit/e6b177eb93f85b5a4e53242214060c97728e2048, Barbican and the Telemetry services (aodh, gnocchi, ceilometer, panko) are disabled by default.
To enable these services with the current builds, you must assign a label to a host as follows:

* system host-label-assign controller-0 openstack-barbican=enabled
* system host-label-assign controller-0 openstack-telemetry=enabled

This follows the existing pattern established for enabling ironic.

Future behavior:
----------------
It should be noted that this behavior is transitional, as I have https://review.opendev.org/#/c/671950/ up for review. With this update, each chart within an application can be enabled/disabled from the command line prior to the application apply.

Again, by default, the following stx-openstack charts will be disabled on application upload per the metadata packaged with the application:

disabled_charts:
- aodh
- barbican
- ceilometer
- gnocchi
- ironic
- panko

The current enablement state of a chart can be seen with:

$ system helm-override-show stx-openstack aodh openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack barbican openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack ceilometer openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack gnocchi openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack ironic openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack panko openstack | grep enabled
| attributes | enabled: false |

and a chart can be enabled/disabled with:

$ system help helm-chart-modify
usage: system helm-chart-modify [--enabled ]

Modify helm chart attributes. This function is provided to modify system behaviorial attributes related to a chart. Chart overrides are not managed through this command.

Positional arguments:
  Name of the application
  Name of the chart
  Namespace of the chart

Optional arguments:
  --enabled  Chart enabled.

$ system helm-chart-modify stx-openstack barbican openstack --enable=true
+------------------+--------------------+
| Property         | Value              |
+------------------+--------------------+
| name             | barbican           |
| namespace        | openstack          |
| system_overrides | {u'enabled': True} |
+------------------+--------------------+

$ system helm-override-show stx-openstack aodh openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack barbican openstack | grep enabled
| attributes | enabled: true |
$ system helm-override-show stx-openstack ceilometer openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack gnocchi openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack ironic openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack panko openstack | grep enabled
| attributes | enabled: false |

When a chart is disabled, it is dynamically removed from its chart group via the application's Armada manifest operator during override generation. When a chart is enabled, additional system criteria may be applied by a chart plugin to disable the chart if a specific system configuration is not met.
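As a convenience, the per-chart queries above can be wrapped in a small loop. This is only a sketch over the commands already shown; the chart names are the ones listed in the disabled_charts metadata:

# Sketch: report the enablement state of each optional stx-openstack chart.
# Chart names are taken from the disabled_charts metadata above.
for chart in aodh barbican ceilometer gnocchi ironic panko; do
    echo -n "${chart}: "
    system helm-override-show stx-openstack ${chart} openstack | grep enabled
done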
Thanks, Bob From zhipengs.liu at intel.com Mon Jul 22 07:14:55 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 22 Jul 2019 07:14:55 +0000 Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC152D0F3@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FC152D0F3@ALA-MBD.corp.ad.wrs.com> Message-ID: <93814834B4855241994F290E959305C7530AF17A@SHSMSX104.ccr.corp.intel.com> Hi Don, I failed to update image. Could you give me some help, thanks! ~/starlingx/cgcs-root/build-tools/build-docker-images$ bash update-stx-image.sh \ > --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \ > --module-src ~/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova --user zhipengl ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/wheels ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images Running: wget http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-stable-wheels.tar --2019-07-22 02:57:20-- http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-stable-wheels.tar Resolving mirror.starlingx.cengn.ca (mirror.starlingx.cengn.ca)... 135.84.104.40 Connecting to mirror.starlingx.cengn.ca (mirror.starlingx.cengn.ca)|135.84.104.40|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 63354880 (60M) [application/octet-stream] Saving to: 'stx-centos-stable-wheels.tar' stx-centos-stable-wheels.tar 100%[===========================================================================>] 60.42M 7.41MB/s in 14s 2019-07-22 02:57:34 (4.42 MB/s) - 'stx-centos-stable-wheels.tar' saved [63354880/63354880] ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/modules/stx-nova ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/wheels ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'project_urls' warnings.warn(msg) sed: -e expression #1, char 20: unterminated `s' command ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/wheels ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images Running: docker image pull starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 master-centos-stable-20190715T233000Z.0: Pulling from starlingx/stx-nova 5ad559c5ae16: Already exists 3d7ee0b84e39: Pull complete e1047bfd73cf: Pull complete 16f151ad544f: Pull complete 844126c15d7e: Pull complete 869460797821: Pull complete 0c319da8f09d: Pull complete 
2e884702ad07: Pull complete Digest: sha256:4032358f3ab208e76c2737854fdcf4f7bd582ef8f91e150193ded8feb662fdbe Status: Downloaded newer image for starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 + bash -x /image-update/internal-update-stx-image.sh + UPDATES_DIR=/image-update + PIP_PACKAGES_DIR=/image-update/pip-packages + DIST_PACKAGES_DIR=/image-update/dist-packages + CUSTOMIZATION_SCRIPT=/image-update/customize.sh ++ source /etc/os-release +++ NAME='CentOS Linux' +++ VERSION='7 (Core)' +++ ID=centos +++ ID_LIKE='rhel fedora' +++ VERSION_ID=7 +++ PRETTY_NAME='CentOS Linux 7 (Core)' +++ ANSI_COLOR='0;31' +++ CPE_NAME=cpe:/o:centos:centos:7 +++ HOME_URL=https://www.centos.org/ +++ BUG_REPORT_URL=https://bugs.centos.org/ +++ CENTOS_MANTISBT_PROJECT=CentOS-7 +++ CENTOS_MANTISBT_PROJECT_VERSION=7 +++ REDHAT_SUPPORT_PRODUCT=centos +++ REDHAT_SUPPORT_PRODUCT_VERSION=7 ++ echo CentOS Linux + OS_NAME='CentOS Linux' ++ getopt -o h -l help: -- + OPTS=' --' + '[' 0 -ne 0 ']' + eval set -- ' --' ++ set -- -- + true + case $1 in + shift + break + install_dist_packages + local -i file_count=0 ++ find /image-update/dist-packages -type f ++ wc -l + file_count=0 + '[' 0 -eq 0 ']' + return 0 + install_pip_packages + local modules + local wheels ++ find /image-update/pip-packages/modules/stx-nova -maxdepth 0 -type d + modules=/image-update/pip-packages/modules/stx-nova ++ find /image-update/pip-packages/wheels/ -type f -name '*.whl' + wheels= + '[' -z /image-update/pip-packages/modules/stx-nova -a -z '' ']' + pip install -vvv --no-deps --no-index --pre --no-cache-dir --only-binary :all: --no-compile --force-reinstall /image-update/pip-packages/modules/stx-nova DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. Ignoring indexes: https://pypi.org/simple Created temporary directory: /tmp/pip-ephem-wheel-cache-Pld6sg Created temporary directory: /tmp/pip-req-tracker-5OQGeX Created requirements tracker '/tmp/pip-req-tracker-5OQGeX' Created temporary directory: /tmp/pip-install-8tWna2 Processing /image-update/pip-packages/modules/stx-nova Created temporary directory: /tmp/pip-req-build-EngJ2Q Added file:///image-update/pip-packages/modules/stx-nova to build tracker '/tmp/pip-req-tracker-5OQGeX' Running setup.py (path:/tmp/pip-req-build-EngJ2Q/setup.py) egg_info for package from file:///image-update/pip-packages/modules/stx-nova Running command python setup.py egg_info ERROR:root:Error parsing Traceback (most recent call last): File "/var/lib/openstack/lib/python2.7/site-packages/pbr/core.py", line 96, in pbr attrs = util.cfg_to_args(path, dist.script_args) File "/var/lib/openstack/lib/python2.7/site-packages/pbr/util.py", line 256, in cfg_to_args pbr.hooks.setup_hook(config) File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/__init__.py", line 25, in setup_hook metadata_config.run() File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/base.py", line 27, in run self.hook() File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/metadata.py", line 26, in hook self.config['name'], self.config.get('version', None)) File "/var/lib/openstack/lib/python2.7/site-packages/pbr/packaging.py", line 849, in get_version name=package_name)) Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. 
It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name nova was given, but was not able to be found. error in setup command: Error parsing /tmp/pip-req-build-EngJ2Q/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name nova was given, but was not able to be found. Cleaning up... Removing source in /tmp/pip-req-build-EngJ2Q Removed file:///image-update/pip-packages/modules/stx-nova from build tracker '/tmp/pip-req-tracker-5OQGeX' Removed build tracker '/tmp/pip-req-tracker-5OQGeX' ERROR: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-EngJ2Q/ Exception information: Traceback (most recent call last): File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 178, in main status = self.run(options, args) File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/commands/install.py", line 352, in run resolver.resolve(requirement_set) File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 131, in resolve self._resolve_one(requirement_set, req) File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 294, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 242, in _get_abstract_dist_for self.require_hashes File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 362, in prepare_linked_requirement abstract_dist.prep_for_dist(finder, self.build_isolation) File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 171, in prep_for_dist self.req.prepare_metadata() File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/req/req_install.py", line 537, in prepare_metadata self.run_egg_info() File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/req/req_install.py", line 615, in run_egg_info command_desc='python setup.py egg_info') File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/utils/misc.py", line 776, in call_subprocess % (command_desc, proc.returncode, cwd)) InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-EngJ2Q/ + '[' 1 -ne 0 ']' + echo 'Failed pip install' Failed pip install + exit 1 Failed to update image: starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 Zhipeng -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: 2019年7月20日 2:04 To: Dean Troyer ; starlingx-discuss at lists.starlingx.io Cc: zhu.boxiang at 99cloud.net Subject: Re: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX For updating an image for testing, take a look at: https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Incremental_Image_Updates As Dean notes, clone the repo and cherry-pick the commit, and then do something like: time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \ --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \ --module-src ${path_to_cloned_repo}/stx-nova ie: # clone stx-nova, get stx/stein.2 cd /localdisk/loadbuild/dpenney/ mkdir nova-update cd 
nova-update/ git clone https://github.com/starlingx-staging/stx-nova.git cd stx-nova/ git fetch https://github.com/starlingx-staging/stx-nova.git stx/stein.2 git checkout FETCH_HEAD # cherry-pick update git fetch https://review.opendev.org/openstack/nova refs/changes/69/651969/13 && git cherry-pick FETCH_HEAD # Fix up conflicts, etc # Build updated image, from 20190715T233000Z build as base, # specifying cloned/modified repo time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \ --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \ --module-src /localdisk/loadbuild/dpenney/nova-update/stx-nova \ --user dpenney This produces an updated image in the local registry: Updated image: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 If you also set a registry with --registry and use the --push option, the command will push the updated image to that registry. This also produces an image record file: $ cat ${MY_WORKSPACE}/std/update-images/unnamed-update/image-updates.lst dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 Which you can pass as an argument to build-helm-charts.sh when building your application tarball for testing: build-helm-charts.sh \ --image-record http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/docker-images/images-centos-stable-versioned.lst \ --image-record ${MY_WORKSPACE}/std/update-images/unnamed-update/image-updates.lst \ --label centos-stable-versioned If you look at the yaml file in the tarball, you can see that it now references the updated image: $ tar xzf ${MY_WORKSPACE}/std/build-helm/stx/stx-openstack-1.0-17-centos-stable-versioned.tgz -O ./stx-openstack.yaml | grep stx-nova: nova_api: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_cell_setup: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_compute: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_compute_ironic: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_compute_ssh: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_conductor: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_consoleauth: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_db_sync: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_novncproxy: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_placement: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_scheduler: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_spiceproxy: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 nova_spiceproxy_assets: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1 -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Friday, July 19, 2019 1:08 PM To: starlingx-discuss at lists.starlingx.io Cc: zhu.boxiang at 99cloud.net Subject: Re: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX On Fri, Jul 19, 2019 at 10:57 AM Yong Hu wrote: > For LP[0], there is a patch (https://review.opendev.org/#/c/651969/) > in Nova upstream, what's the method/process for StarlingX to > cherry-pick it for testing a LP reported in StarlingX? > > [0]:https://bugs.launchpad.net/starlingx/+bug/1820882 At a high level you would clone the stx-nova repo stx/stein.2 branch [0] and cherry-pick/backport the commit you want to test, and rebuild the Nova docker image and test that. 
I do have some Zuul jobs in starlingx/tis-repo that will pull that Nova branch and run the unit, functional and pep8 tox jobs to test in OpenStack CI. Given that the stx/stein branches also carry the NUMA live migration patches you would probably want to do whatever other live migration testing would be done upstream to validate this in the StarlingX context. dt [0] https://github.com/starlingx-staging/stx-nova/tree/stx/stein.2 -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Mon Jul 22 13:44:11 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 22 Jul 2019 09:44:11 -0400 Subject: [Starlingx-discuss] [Build] Description and License information on hub.docker.com In-Reply-To: References: Message-ID: <28a1987d-30d9-3eec-a7dc-9d814198e560@windriver.com> Good idea. I think we would have to do it individually for each image.  Not sure what the options would be if we had formal project status.  Will have to look into it. Don or I can make it happen.  We need to decide on how the text is to read, and what links we will point to for more detailed information. Scott On 2019-07-19 12:54 p.m., Saul Wold wrote: > > Folks, > > I am not sure how exactly should own this, we should probably have a > Description and License section on hub.docker.com for the StarlingX > containers. > > Similar to CentOS [0] or Ubuntu [1], they have the same basic > information template and I think the license text is similar also. > > I know that Scott might have the keys to the docker kingdom! > > Thoughts? > > Sau! > [0] https://hub.docker.com/_/centos > [1] https://hub.docker.com/_/ubuntu > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Mon Jul 22 13:59:20 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Mon, 22 Jul 2019 13:59:20 +0000 Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX In-Reply-To: <93814834B4855241994F290E959305C7530AF17A@SHSMSX104.ccr.corp.intel.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FC152D0F3@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF17A@SHSMSX104.ccr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC152D3B5@ALA-MBD.corp.ad.wrs.com> Hi Zhipeng, Can you verify that you can run the following in your cloned repo? [/localdisk/loadbuild/dpenney/nova-update/stx-nova]$ python ./setup.py --version 19.0.1.dev116 -----Original Message----- From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Monday, July 22, 2019 3:15 AM To: Penney, Don; Dean Troyer; starlingx-discuss at lists.starlingx.io Cc: zhu.boxiang at 99cloud.net Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX Hi Don, I failed to update image. Could you give me some help, thanks! 
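For context on the version check above: pbr derives the package version from the git metadata in the source tree, so the "Versioning for this project requires either an sdist tarball, or access to an upstream git repository" error usually means the directory passed via --module-src does not contain a usable .git directory. A minimal sketch of the check, with the path being an example only:

cd ~/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova  # example path
git log --oneline -1          # confirms the git history is present
python ./setup.py --version   # should print something like 19.0.1.dev116

# If the tree genuinely has no git history, pbr's documented PBR_VERSION
# environment variable can pin the version explicitly as a workaround:
export PBR_VERSION=19.0.1     # example version only
python ./setup.py --version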
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Anirudh.Gupta at hsc.com Fri Jul 19 04:13:24 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Fri, 19 Jul 2019 04:13:24 +0000
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17th July 2019
Message-ID:

Hi Team,

I am following the below document to set up AIO-Simplex R2.0 with the green build dated 17-July.
https://docs.starlingx.io/deploy_install_guides/upcoming/aio_simplex.html

I have successfully verified the endpoints, using the command "openstack endpoint list".

Issue 1 :- The endpoint list contains endpoints of the services fm, patching, vim, smapi, keystone, barbican and sysinv. The other basic OpenStack services are visible if I run "kubectl get services -n openstack". But there are no endpoints for nova, neutron, glance and all the other OpenStack services?

Issue 2 :- I am unable to proceed further with the provider/tenant networking setup:

[root at controller-0 sysadmin(keystone_admin)]# neutron providernet-create ${PHYSNET0} --type vlan
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Unknown command [u'providernet-create', u'--type', u'vlan']

What could be the solution to proceed further?

Regards
Anirudh Gupta

DISCLAIMER: This electronic message and all of its contents contain information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

From zhang.kunpeng at 99cloud.net Mon Jul 22 02:27:19 2019
From: zhang.kunpeng at 99cloud.net (张鲲鹏)
Date: Mon, 22 Jul 2019 10:27:19 +0800
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To:
Message-ID:

Hi Chenjie,

I attached those logs to my last email; I don't know why you didn't get them. I will attach it (logs.tar) again. If you still cannot find it, please let me know.

Thanks
Kunpeng

> On Jul 19, 2019, at 16:17, Xu, Chenjie wrote:
>
> Hi Kunpeng,
> From the below logs, we can find that:
> 1. The ovs agent detects that OVS is dead.
> 2. After OVS has been restarted, the ovs agent tries to reset bridges and recover ports.
> 2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
> 2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389
> 2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.
>
> Could you please attach the below logs?
> /var/log/openvswitch/ovs-vswitchd.log
> /var/log/openvswitch/ovsdb-server.log
> /var/log/syslog
> neutron log (the log file is specified in /etc/neutron/neutron.conf)
>
> Best Regards,
> Xu, Chenjie
>
> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
> Sent: Friday, July 19, 2019 10:21 AM
> To: Xu, Chenjie
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
>
> Hi Chenjie,
>
> Here are the logs. At 02:04 UTC I restarted the VM. In openstack.log I found some error messages; I don't know if they are relevant.
>
> 2019-07-19 02:04:16.141 186477 INFO eventlet.wsgi.server [req-6600ca74-1f93-4e54-88c1-35f964f1e055 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0178909
> 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json]: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file)
> 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json] dies due to the error: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file)
> 2019-07-19 02:04:22.077 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused
> 2019-07-19 02:04:22.079 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused)
> 2019-07-19 02:04:22.089 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused
> 2019-07-19 02:04:22.090 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused)
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out: Timeout: 10 seconds
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Failed to communicate with the switch: RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int Traceback (most recent call last):
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     flows = self.dump_flows(constants.CANARY_TABLE)
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 147, in dump_flows
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     reply_multi=True)
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 95, in _send_msg
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     raise RuntimeError(m)
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int
> 2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
> 2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389
> 2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.
> 2019-07-19 02:04:24.242 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Mapping physical network providernet-a to bridge br-phy0
> 2019-07-19 02:04:24.295 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Bridge br-phy0 has datapath-ID 0000f8f21e640120
>
> Kunpeng
>
> On Jul 19, 2019, at 09:29, Xu, Chenjie wrote:
>
> Hi Kunpeng,
> You can check the bridges and openflows with the following commands:
> ovs-vsctl show
> ovs-ofctl dump-flows br-int
> ovs-ofctl dump-flows br-phy0
>
> The virtual networks used by the VMs are based on those openflows, and restarting ovs-vswitchd can't reinstall the openflows.
> That's why, when ovs-vswitchd restarts, you lose the connections to the VMs.
>
> I think we need to figure out why ovs-vswitchd is restarted when you restart the VM. Could you please check the below logs to see why ovs-vswitchd is restarted?
> /var/log/openvswitch/ovs-vswitchd.log
> /var/log/syslog
> neutron log (the log file is specified in /etc/neutron/neutron.conf)
>
> Best Regards,
> Xu, Chenjie
>
> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
> Sent: Thursday, July 18, 2019 7:09 PM
> To: Xu, Chenjie
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
>
> I also find that when ovs-vswitchd is restarted, I lose the connections to the VMs.
>
> Before restarting OVS:
>
> controller-0:/home/wrsroot# ip a
> 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.3/24 brd 127.168.204.255 scope host lo
>        valid_lft forever preferred_lft forever
>     inet 169.254.202.2/24 scope global lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.2/24 scope host secondary lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.5/24 scope host secondary lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.6/24 scope host secondary lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.152/24 scope host secondary lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
> 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
>     link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
>     inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
>        valid_lft forever preferred_lft forever
>     inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
>        valid_lft forever preferred_lft forever
> 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
> 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
> 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
> 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
>     link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::faf2:1eff:fe64:8060/64 scope link
>        valid_lft forever preferred_lft forever
> 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000
>     link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::faf2:1eff:fe64:8061/64 scope link
>        valid_lft forever preferred_lft forever
> 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
> 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
> 16: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
>     link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::faf2:1eff:fe64:120/64 scope link
>        valid_lft forever preferred_lft forever
> 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
>     link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
>        valid_lft forever preferred_lft forever
>
> After:
>
> controller-0:/home/wrsroot# systemctl restart ovs-vswitchd
> controller-0:/home/wrsroot# ip a
> 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.3/24 brd 127.168.204.255 scope host lo
>        valid_lft forever preferred_lft forever
>     inet 169.254.202.2/24 scope global lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.2/24 scope host secondary lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.5/24 scope host secondary lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.6/24 scope host secondary lo
>        valid_lft forever preferred_lft forever
>     inet 127.168.204.152/24 scope host secondary lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
> 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
>     link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
>     inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
>        valid_lft forever preferred_lft forever
>     inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
>        valid_lft forever preferred_lft forever
> 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
> 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
> 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
> 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
>     link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::faf2:1eff:fe64:8060/64 scope link
>        valid_lft forever preferred_lft forever
> 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000
>     link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::faf2:1eff:fe64:8061/64 scope link
>        valid_lft forever preferred_lft forever
> 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
> 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
> 16: br-phy0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
>     link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::faf2:1eff:fe64:120/64 scope link
>        valid_lft forever preferred_lft forever
> 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
>     link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
>        valid_lft forever preferred_lft forever
> 20: tapfb74713e-cc: mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff
> 21: tap1a965902-0b: mtu 1500 qdisc noop state DOWN group default qlen 1000
>     link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff
>
> One of the VMs:
> [inline screenshot scrubbed from the archive]
>
> On Jul 18, 2019, at 18:09, 张鲲鹏 wrote:
>
> Hi Xu, Chenjie
>
> I have tried to create a VM with 2 pci_passthrough network ports without DPDK; there was the same problem when I rebooted it.
> Also, it was the same when I rebooted the VM with 2 SR-IOV VFs.
> Do you have any ideas to debug this problem?
> Thanks
> Kunpeng
>
> On Jul 17, 2019, at 14:59, Xu, Chenjie wrote:
>
> Hi Kunpeng,
> Maybe you can use SR-IOV and pass through the VF, which has performance similar to the physical NIC, to the VM. And then you can use DPDK inside the VM with the VF.
>
> Sorry, I don't have an easy way to disable DPDK in stx1.0. The following command is used for stx2.0, which is still in progress:
> system modify --vswitch_type none
>
> Best Regards,
> Xu, Chenjie
>
> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
> Sent: Tuesday, July 16, 2019 5:40 PM
> To: Xu, Chenjie
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
>
> Hi Chenjie,
>
> Well, I will try the network topology as you said. But a passthrough NIC with DPDK is our customer's requirement.
> And do you have an easy way to disable the DPDK of openvswitch in stx1.0?
> I had tried to execute "system modify --vswitch_type none" before "system host-unlock controller-0", but it doesn't work well.
>
> Thanks
> Kunpeng
>
> On Jul 16, 2019, at 16:49, Xu, Chenjie wrote:
>
> Hi Kunpeng,
> When you reboot the VM with two physical pci-passthrough NICs, ovs-vswitchd is restarted and the interfaces and bridges go down. The virtual networks used by the VMs are based on these interfaces and bridges, so the other VMs will lose their connections.
>
> Typically you should not pass a PCI device which is bound to DPDK through to the VM. Are the 2 network ports which are used for passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology:
> 2 network ports without DPDK > VM
> 2 network ports with DPDK > Data Network
> 1 network port without DPDK > OAM
>
> Best Regards,
> Xu, Chenjie
>
> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
> Sent: Tuesday, July 16, 2019 3:54 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
>
> Hi guys,
>
> Recently I hit a strange problem in StarlingX. When I rebooted one VM with two physical pci-passthrough NICs, none of the VMs could be reached any more. I lost the connections to all VMs, and the VMs also lost each other.
>
> Below is the StarlingX environment.
>
> 1. stx1.0 version, bootimage[1]
> 2. Simplex deployment
> 3. 5 network ports. Only one doesn't support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network, and the other two are used for passthrough to a VM.
> 4. The VM was also attached to two virtual networks. I have tested the case of attaching only one virtual network; there was no problem.
>
> When I rebooted the VM, several things happened: the interfaces and bridges went down, all the virtual dhcp services went down, and ovs-vswitchd was restarted. But when I brought the interfaces and dhcp services up and rebooted the other VMs, I got the connections to them back again.
>
> It's OK to reboot a VM without a physical NIC. We thought it might be caused by ovs-dpdk, so we stopped using ovs-dpdk, started OVS manually, and the problem was gone.
>
> I cannot understand the problem; could anybody give me some comments on it? Thanks a lot.
>
> [1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso
>
> Kunpeng

-------------- next part --------------
A non-text attachment was scrubbed...
Name: log.tar
Type: application/x-tar
Size: 239317 bytes
Desc: not available
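For reference, Chenjie's question above - whether the passthrough ports are bound to DPDK - can be checked directly on the host. A minimal sketch follows; the dpdk-devbind.py path varies by packaging, so treat the path (and the interface name, taken from the ip a output in this thread) as assumptions rather than the exact stx layout:

# Show which NICs sit on a DPDK-compatible driver (vfio-pci/igb_uio)
# and which are still on a kernel driver; the script path is distro-dependent.
/usr/share/dpdk/usertools/dpdk-devbind.py --status
# Cross-check the driver behind a kernel-visible interface:
ethtool -i enp175s0f0
# Ports that OVS has attached as DPDK devices report type "dpdk":
ovs-vsctl --columns=name,type list Interface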
From chenjie.xu at intel.com Mon Jul 22 03:31:11 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Mon, 22 Jul 2019 03:31:11 +0000
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To: 
References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net>
Message-ID: 

Hi Kunpeng,
Sorry for not seeing logs.tar. The following logs in openvswitch/ovs-vswitchd.log show that ovs-vswitchd was restarted, but they don't show why it was restarted:

2019-07-18T12:29:59.948Z|00286|connmgr|INFO|br-phy0<->unix#9: 1 flow_mods in the last 0 s (1 adds)
2019-07-19T02:04:11.973Z|00151|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
2019-07-19T02:04:21.273Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2019-07-19T02:04:21.277Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0
2019-07-19T02:04:21.277Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1
2019-07-19T02:04:21.277Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores
2019-07-19T02:04:21.277Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2019-07-19T02:04:21.277Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2019-07-19T02:04:21.279Z|00007|dpdk|INFO|Using DPDK 17.11.0
2019-07-19T02:04:21.279Z|00008|dpdk|INFO|DPDK Enabled - initializing...

The syslog doesn't contain the logs for 2019-07-19. Could you please collect that part of the log?

I will try to reproduce this bug on StarlingX 2.0 and will let you know the result.

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Monday, July 22, 2019 10:27 AM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,
I attached those logs to my last email; I don't know why you didn't get them. I will attach them (logs.tar) again - if you cannot find the attachment, tell me in time.

Thanks
Kunpeng

> On Jul 19, 2019, at 16:17, Xu, Chenjie wrote:
>
> Hi Kunpeng,
> From the below logs, we can find that
> 1. ovs agent detects that the OVS is dead.
> 2. After OVS has been restarted, ovs agent tries to reset bridges and recover ports.
>
> 2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
> 2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389
> 2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.
>
> Could you please attach the below logs?
> /var/log/openvswitch/ovs-vswitchd.log
> /var/log/openvswitch/ovsdb-server.log
> /var/log/syslog
> neutron log (the log file is specified in /etc/neutron/neutron.conf)
>
> Best Regards,
> Xu, Chenjie
>
> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
> Sent: Friday, July 19, 2019 10:21 AM
> To: Xu, Chenjie
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
>
> Hi Chenjie,
>
> These are the logs. At 02:04 UTC I restarted the VM. In openstack.log I found some error messages; I don't know if they're relevant.
>
> 2019-07-19 02:04:16.141 186477 INFO eventlet.wsgi.server [req-6600ca74-1f93-4e54-88c1-35f964f1e055 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0178909
> 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json]: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file)
> 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json] dies due to the error: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file)
> 2019-07-19 02:04:22.077 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused
> 2019-07-19 02:04:22.079 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused)
> 2019-07-19 02:04:22.089 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused
> 2019-07-19 02:04:22.090 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused)
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out: Timeout: 10 seconds
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Failed to communicate with the switch: RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int Traceback (most recent call last):
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     flows = self.dump_flows(constants.CANARY_TABLE)
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 147, in dump_flows
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     reply_multi=True)
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int   File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 95, in _send_msg
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int     raise RuntimeError(m)
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out
> 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int
> 2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
> 2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389
> 2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.
> 2019-07-19 02:04:24.242 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Mapping physical network providernet-a to bridge br-phy0
> 2019-07-19 02:04:24.295 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Bridge br-phy0 has datapath-ID 0000f8f21e640120
>
> Kunpeng
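The "OVS is dead / OVS is restarted" warnings quoted above come from the neutron ovs agent polling a canary flow table - table 23 in the traceback (constants.CANARY_TABLE). The agent's probe can be reproduced by hand while recreating the issue; a sketch, using the bridge names from this thread:

# The agent declares OVS dead when this request times out:
ovs-ofctl -O OpenFlow13 dump-flows br-int table=23
# After ovs-vswitchd restarts, its flow tables come back empty, which is
# why the agent then has to reset bridges and re-install flows:
ovs-ofctl -O OpenFlow13 dump-flows br-phy0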
From zhang.kunpeng at 99cloud.net Mon Jul 22 06:28:49 2019
From: zhang.kunpeng at 99cloud.net (张鲲鹏)
Date: Mon, 22 Jul 2019 14:28:49 +0800
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To: 
References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net>
Message-ID: 

Hi Chenjie,

Actually the syslog logged nothing when I restarted the VM, and ovs-vswitchd didn't log the reason why OVS restarted either, so it is difficult to debug. I don't know whether stx 2.0 will reproduce this bug, but on stx 1.0 it can be reproduced reliably.

Thanks
Kunpeng

> On Jul 22, 2019, at 11:31, Xu, Chenjie wrote:
>
> Hi Kunpeng,
> Sorry for not seeing logs.tar. The following logs in openvswitch/ovs-vswitchd.log show that ovs-vswitchd was restarted, but they don't show why it was restarted:
> [...]
> The syslog doesn't contain the logs for 2019-07-19. Could you please collect that part of the log?
>
> I will try to reproduce this bug on StarlingX 2.0 and will let you know the result.
>
> Best Regards,
> Xu, Chenjie
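When ovs-vswitchd.log only records the new process starting, the exit reason of the old process usually has to come from outside OVS. A few places worth checking on the stx 1.0 host (a sketch, assuming systemd supervises ovs-vswitchd and coredump collection is enabled):

# Exit code or signal of the previous ovs-vswitchd instance:
systemctl status ovs-vswitchd
journalctl -u ovs-vswitchd --since "2019-07-19 02:00" --until "2019-07-19 02:10"
# A segfault or OOM kill lands in the kernel ring buffer even when syslog is quiet:
dmesg -T | grep -iE 'ovs-vswitchd|out of memory|segfault'
# Any cores left behind:
coredumpctl list ovs-vswitchd 2>/dev/null || ls -l /var/lib/systemd/coredump /var/crash 2>/dev/null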
From Al.Bailey at windriver.com Mon Jul 22 14:52:11 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Mon, 22 Jul 2019 14:52:11 +0000
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019
In-Reply-To: 
References: 
Message-ID: 

In order to see the endpoints for nova, neutron, etc. you need the /etc/openstack/clouds.yaml file to be set up. The steps you need are referenced in this (deprecated) document:
https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints

It will also show the appropriate commands for the provider/tenant networking setup, until the official doc is synced with that wiki.

Al

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Friday, July 19, 2019 12:13 AM
To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019

Hi Team,

I am following the below document to set up AIO-Simplex R2.0 with the green build dated 17-July:
https://docs.starlingx.io/deploy_install_guides/upcoming/aio_simplex.html

I have successfully verified the endpoints using the command "openstack endpoint list".

Issue 1: The endpoint list contains endpoints for the services fm, patching, vim, smapi, keystone, barbican and sysinv. The other basic openstack services are visible if I run "kubectl get services -n openstack", but there are no endpoints for nova, neutron, glance and the other openstack services.

Issue 2: I am unable to proceed further with the provider/tenant networking setup:

[root at controller-0 sysadmin(keystone_admin)]# neutron providernet-create ${PHYSNET0} --type vlan
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Unknown command [u'providernet-create', u'--type', u'vlan']

What could be the solution to proceed further?

Regards
Anirudh Gupta
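For readers who cannot reach the wiki Al references, the file he describes uses the standard os-client-config format. A minimal sketch is below; the cloud name, auth_url and credentials are illustrative assumptions, not values copied from the wiki:

# /etc/openstack/clouds.yaml
clouds:
  openstack_helm:
    region_name: RegionOne
    identity_api_version: 3
    auth:
      username: 'admin'
      password: '<admin password>'
      project_name: 'admin'
      project_domain_name: 'default'
      user_domain_name: 'default'
      auth_url: 'http://keystone.openstack.svc.cluster.local/v3'

# Then point the client at it; "openstack endpoint list" should now show
# nova, neutron, glance and the other containerized services:
export OS_CLOUD=openstack_helm
openstack endpoint list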
From dominig.arfoll at fridu.net Mon Jul 22 16:12:48 2019
From: dominig.arfoll at fridu.net (Dominig ar Foll (Intel Open Source))
Date: Mon, 22 Jul 2019 18:12:48 +0200
Subject: [Starlingx-discuss] Building openstack-helm under OpenSUSE OBS after option serve has been removed from helm fails
Message-ID: <5f470fa9-3de3-eefe-dc2e-e0641dbb7f0b@fridu.net>

Hello,

the build process of the openstack-helm package under the openSUSE OBS expects the 'serve' option of the helm command to be available. The 'serve' option has been removed from helm (commit b8eb479a4f3b1934770c75ed9013dfe263d8940f from April 2018), and that change has now surfaced in openSUSE packages.

Would someone be able to indicate a viable alternative to the "helm serve --local-repo" mode?

-- 
Dominig ar Foll
Senior Software Architect
Intel Open Source Technology Centre

From marcela.a.rosales.jimenez at intel.com Mon Jul 22 21:39:58 2019
From: marcela.a.rosales.jimenez at intel.com (Rosales Jimenez, Marcela A)
Date: Mon, 22 Jul 2019 21:39:58 +0000
Subject: [Starlingx-discuss] Multi-OS team meeting : Notes of the meeting: 7/22/19
Message-ID: 

Multi-OS team meeting

Summary of the meeting: 7/22/19

* openSUSE
  * Update
    * We had some problems starting some services: initial experiment with pmon.
    * openSUSE doesn't have the initscripts rpm https://opendev.org/starlingx/metal/src/branch/master/mtce/src/pmon/scripts/pmon#L20
    * Service files are using a sysv script https://opendev.org/starlingx/metal/src/branch/master/mtce/src/pmon/scripts/pmon.service
    * GDC team is working on an enabling plan for StarlingX services: what to enable first, and identifying dependencies.
  * Discussion
    * Prioritize the sysv-to-systemd conversion.
    * Testing would be: create an image, verify the service can be started or stopped, and run sanity tests if needed.
    * Do not send patches until the branch is split.
* Yocto
  * Update
    * 1016 packages needed and 476 missing packages.
  * Discussion
    * Suggestion to check layers.openembedded.org

From build.starlingx at gmail.com Mon Jul 22 23:30:19 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Mon, 22 Jul 2019 19:30:19 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 189 - Failure!
Message-ID: <1829751717.17.1563838220234.JavaMail.javamailuser@localhost>

Project: STX_build_master_master
Build #: 189
Status: Failure
Timestamp: 20190722T233000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190722T233000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From maria.g.perez.ibarra at intel.com Mon Jul 22 23:49:15 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Mon, 22 Jul 2019 23:49:15 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722
Message-ID: 

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-22 (link)

Status: RED

===========================================
Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS] 24 TCs FAIL
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS] 24 TCs FAIL
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS] 24 TCs FAIL
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

-----------------------------------------------------------------------------------
Create instance from Image or from Volume fails https://bugs.launchpad.net/starlingx/+bug/1837241

Regards
Maria G.

From Bin.Yang at windriver.com Tue Jul 23 00:52:47 2019
From: Bin.Yang at windriver.com (Yang, Bin)
Date: Tue, 23 Jul 2019 00:52:47 +0000
Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node
Message-ID: 

Dear experts,

I have been trying to install the stx milestone3 build on VirtualBox in standard mode. I managed to install 2 controller nodes and 1 worker node and provisioned them according to the instructions on the wiki. I then uploaded the openstack helm charts and applied them, but the apply operation failed.
When I investigated the root cause, I found that the worker node is not in Ready status:

[sysadmin at controller-0 ~(keystone_admin)]$ kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
compute-0      NotReady            3h32m   v1.13.5
controller-0   Ready      master   14d     v1.13.5
controller-1   Ready      master   13d     v1.13.5

The root cause is that the worker node has no direct access to the external network, and hence it is unable to pull docker images via the proxy:

[sysadmin at controller-0 ~(keystone_admin)]$ kubectl -n kube-system describe pods kube-proxy-pftk8
Events:
Type     Reason                  Age                    From                Message
----     ------                  ----                   ----                -------
Warning  FailedCreatePodSandBox  38s (x462 over 3h36m)  kubelet, compute-0  Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

compute-0:~# docker pull k8s.gcr.io/pause:3.1
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

The docker proxy configuration and the traceroute to the proxy:

compute-0:~# cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://128.224.230.5:9090"
Environment="HTTPS_PROXY=http://128.224.230.5:9090"
Environment="NO_PROXY=localhost,127.0.0.1,registry.local,192.168.204.2,192.168.204.3,10.0.2.25,10.0.2.26,192.168.204.4,10.0.2.27"

compute-0:~# traceroute 128.224.230.5
traceroute to 128.224.230.5 (128.224.230.5), 30 hops max, 60 byte packets
 1  controller-0 (192.168.204.3)  0.200 ms  0.256 ms  0.208 ms
 2  * * *
 3  * * *

Can anybody help explain how a worker node without direct access to the external network is supposed to pull docker images? How can I work around this issue?

Thanks

Best Regards,
Bin Yang, Solution Engineering Team, Wind River
ONAP Multi-VIM/Cloud PTL
Direct +86,10,84777126    Mobile +86,13811391682    Fax +86,10,64398189
Skype: yangbincs993

From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com]
Sent: Tuesday, July 23, 2019 7:49 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722

[...]
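A quick way to confirm where the pull is failing, run from the worker; a sketch using the proxy address from the http-proxy.conf above:

# Is the proxy reachable at all from this node? (10s timeout)
curl -x http://128.224.230.5:9090 -m 10 -sSI https://k8s.gcr.io/v2/
# What proxy settings the docker daemon actually picked up:
docker info 2>/dev/null | grep -iA2 proxy
# Is there any route that could carry traffic to the proxy?
ip route get 128.224.230.5

If only the controllers can reach the proxy, the usual pattern is for the worker to pull through a registry that its NO_PROXY already names (registry.local above); whether a given stx build populates that registry automatically is version-dependent, so treat that as an assumption to verify.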
Thanks

Best Regards,
Bin Yang, Solution Engineering Team, Wind River
ONAP Multi-VIM/Cloud PTL
Direct: +86 10 84777126    Mobile: +86 13811391682    Fax: +86 10 64398189
Skype: yangbincs993

From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com]
Sent: Tuesday, July 23, 2019 7:49 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722

[snip]

From zhipengs.liu at intel.com Tue Jul 23 02:32:11 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Tue, 23 Jul 2019 02:32:11 +0000
Subject: [Starlingx-discuss] About sanity test block issue - Create instance from Image or from Volume fails
Message-ID: <93814834B4855241994F290E959305C7530AF337@SHSMSX104.ccr.corp.intel.com>

Hi Boxiang and all,

https://bugs.launchpad.net/starlingx/+bug/1833622

This looks like a regression; please see the LP link above for more info. The test log shows the following error:

fault message: u'Build of instance ab3dfe11-3ac3-4a26-a1ce-3bff1e2ca78b aborted: Image 69e1593f-4d03-4cd5-9779-8ffc1d227da0 is unacceptable: Converted to raw, but format is now qcow2'

In 20190719T013000Z/outputs/CHANGELOG.txt I saw that we merged https://review.opendev.org/#/c/661512/:

==================================================================
Use true for force_raw_images when using ceph image backend

We need this patch for two reasons:
1. Nova in StarlingX does not yet carry patch [0]. We use remote storage (ceph) as the nova backend; with force_raw_images set to False, VMs booted from qcow2 images fail to boot.
2. Once nova in StarlingX carries patch [0], the nova-compute service will refuse to start if force_raw_images is still False.

Therefore we must set force_raw_images to True.
[0] https://review.opendev.org/#/c/640271/
==================================================================

However, I double-checked our current stein.2 nova, and [0] is not included. This is the problem!

I have two proposals:
1) Revert 640271 to unblock the sanity test first.
2) Cherry-pick [0] into nova stein.2.

Any comment? Thanks!

Zhipeng

From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com]
Sent: 2019年7月23日 7:49
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722

[snip]

From zhipengs.liu at intel.com Tue Jul 23 03:15:51 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Tue, 23 Jul 2019 03:15:51 +0000
Subject: [Starlingx-discuss] About sanity test block issue - Create instance from Image or from Volume fails
In-Reply-To: <93814834B4855241994F290E959305C7530AF337@SHSMSX104.ccr.corp.intel.com>
References: <93814834B4855241994F290E959305C7530AF337@SHSMSX104.ccr.corp.intel.com>
Message-ID: <93814834B4855241994F290E959305C7530AF378@SHSMSX104.ccr.corp.intel.com>

Sorry, I pasted the wrong LP link in my last mail; the correct link is https://bugs.launchpad.net/starlingx/+bug/1837241. The rest of the mail is unchanged:

From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
Sent: 2019年7月23日 10:32
To: Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io
Cc: zhu.boxiang at 99cloud.net
Subject: [Starlingx-discuss] About sanity test block issue - Create instance from Image or from Volume fails
[snip]
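As a possible stopgap while the revert/cherry-pick question is settled, uploading the test image in raw format should sidestep the conversion check that produces the fault above. A minimal sketch; the image names are hypothetical, and it assumes qemu-img and the openstack client are available:

# convert the qcow2 source image to raw
qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw

# upload it to glance as raw, so no qcow2-to-raw conversion happens at boot
openstack image create --disk-format raw --container-format bare \
    --file cirros.raw cirros-raw

Booting instances from the raw image avoids the "Converted to raw, but format is now qcow2" failure path, since there is nothing left for force_raw_images to convert.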
From zhipengs.liu at intel.com Tue Jul 23 03:22:21 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Tue, 23 Jul 2019 03:22:21 +0000
Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC152D3B5@ALA-MBD.corp.ad.wrs.com>
References: <6703202FD9FDFF4A8DA9ACF104AE129FC152D0F3@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF17A@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC152D3B5@ALA-MBD.corp.ad.wrs.com>
Message-ID: <93814834B4855241994F290E959305C7530AF38E@SHSMSX104.ccr.corp.intel.com>

Hi Don,

~/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova$ python ./setup.py --version
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'long_description_content_type'
  warnings.warn(msg)
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'project_urls'
  warnings.warn(msg)
Installed /home/wrsroot/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova/.eggs/pbr-5.4.1-py2.7.egg
[pbr] Generating ChangeLog
19.0.1.dev117

Zhipeng

-----Original Message-----
From: Penney, Don [mailto:Don.Penney at windriver.com]
Sent: 2019年7月22日 21:59
To: Liu, ZhipengS ; Dean Troyer ; starlingx-discuss at lists.starlingx.io
Cc: zhu.boxiang at 99cloud.net
Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX

Hi Zhipeng,

Can you verify that you can run the following in your cloned repo?

[/localdisk/loadbuild/dpenney/nova-update/stx-nova]$ python ./setup.py --version
19.0.1.dev116

-----Original Message-----
From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
Sent: Monday, July 22, 2019 3:15 AM
To: Penney, Don; Dean Troyer; starlingx-discuss at lists.starlingx.io
Cc: zhu.boxiang at 99cloud.net
Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX

Hi Don,

I failed to update the image. Could you give me some help? Thanks!
200 OK Length: 63354880 (60M) [application/octet-stream] Saving to: 'stx-centos-stable-wheels.tar' stx-centos-stable-wheels.tar 100%[===========================================================================>] 60.42M 7.41MB/s in 14s 2019-07-22 02:57:34 (4.42 MB/s) - 'stx-centos-stable-wheels.tar' saved [63354880/63354880] ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/modules/stx-nova ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/wheels ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'project_urls' warnings.warn(msg) sed: -e expression #1, char 20: unterminated `s' command ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/wheels ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images Running: docker image pull starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 master-centos-stable-20190715T233000Z.0: Pulling from starlingx/stx-nova 5ad559c5ae16: Already exists 3d7ee0b84e39: Pull complete e1047bfd73cf: Pull complete 16f151ad544f: Pull complete 844126c15d7e: Pull complete 869460797821: Pull complete 0c319da8f09d: Pull complete 2e884702ad07: Pull complete Digest: sha256:4032358f3ab208e76c2737854fdcf4f7bd582ef8f91e150193ded8feb662fdbe Status: Downloaded newer image for starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 + bash -x /image-update/internal-update-stx-image.sh + UPDATES_DIR=/image-update + PIP_PACKAGES_DIR=/image-update/pip-packages + DIST_PACKAGES_DIR=/image-update/dist-packages + CUSTOMIZATION_SCRIPT=/image-update/customize.sh ++ source /etc/os-release +++ NAME='CentOS Linux' +++ VERSION='7 (Core)' +++ ID=centos +++ ID_LIKE='rhel fedora' +++ VERSION_ID=7 +++ PRETTY_NAME='CentOS Linux 7 (Core)' +++ ANSI_COLOR='0;31' +++ CPE_NAME=cpe:/o:centos:centos:7 +++ HOME_URL=https://www.centos.org/ +++ BUG_REPORT_URL=https://bugs.centos.org/ +++ CENTOS_MANTISBT_PROJECT=CentOS-7 +++ CENTOS_MANTISBT_PROJECT_VERSION=7 +++ REDHAT_SUPPORT_PRODUCT=centos +++ REDHAT_SUPPORT_PRODUCT_VERSION=7 ++ echo CentOS Linux + OS_NAME='CentOS Linux' ++ getopt -o h -l help: -- + OPTS=' --' + '[' 0 -ne 0 ']' + eval set -- ' --' ++ set -- -- + true + case $1 in + shift + break + install_dist_packages + local -i file_count=0 ++ find /image-update/dist-packages -type f wc -l + file_count=0 + '[' 0 -eq 0 ']' + return 0 + install_pip_packages + local modules + local wheels ++ find /image-update/pip-packages/modules/stx-nova -maxdepth 0 -type d + modules=/image-update/pip-packages/modules/stx-nova ++ find /image-update/pip-packages/wheels/ -type f -name '*.whl' + wheels= + '[' -z /image-update/pip-packages/modules/stx-nova -a -z '' ']' + pip install -vvv --no-deps --no-index --pre --no-cache-dir + --only-binary :all: --no-compile --force-reinstall + /image-update/pip-packages/modules/stx-nova DEPRECATION: Python 2.7 will reach the end of its life on 
January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. Ignoring indexes: https://pypi.org/simple Created temporary directory: /tmp/pip-ephem-wheel-cache-Pld6sg Created temporary directory: /tmp/pip-req-tracker-5OQGeX Created requirements tracker '/tmp/pip-req-tracker-5OQGeX' Created temporary directory: /tmp/pip-install-8tWna2 Processing /image-update/pip-packages/modules/stx-nova Created temporary directory: /tmp/pip-req-build-EngJ2Q Added file:///image-update/pip-packages/modules/stx-nova to build tracker '/tmp/pip-req-tracker-5OQGeX' Running setup.py (path:/tmp/pip-req-build-EngJ2Q/setup.py) egg_info for package from file:///image-update/pip-packages/modules/stx-nova Running command python setup.py egg_info ERROR:root:Error parsing Traceback (most recent call last): File "/var/lib/openstack/lib/python2.7/site-packages/pbr/core.py", line 96, in pbr attrs = util.cfg_to_args(path, dist.script_args) File "/var/lib/openstack/lib/python2.7/site-packages/pbr/util.py", line 256, in cfg_to_args pbr.hooks.setup_hook(config) File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/__init__.py", line 25, in setup_hook metadata_config.run() File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/base.py", line 27, in run self.hook() File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/metadata.py", line 26, in hook self.config['name'], self.config.get('version', None)) File "/var/lib/openstack/lib/python2.7/site-packages/pbr/packaging.py", line 849, in get_version name=package_name)) Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name nova was given, but was not able to be found. error in setup command: Error parsing /tmp/pip-req-build-EngJ2Q/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name nova was given, but was not able to be found. Cleaning up... 
  Removing source in /tmp/pip-req-build-EngJ2Q
Removed file:///image-update/pip-packages/modules/stx-nova from build tracker '/tmp/pip-req-tracker-5OQGeX'
Removed build tracker '/tmp/pip-req-tracker-5OQGeX'
ERROR: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-EngJ2Q/
Exception information:
Traceback (most recent call last):
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 178, in main
    status = self.run(options, args)
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/commands/install.py", line 352, in run
    resolver.resolve(requirement_set)
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 131, in resolve
    self._resolve_one(requirement_set, req)
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 294, in _resolve_one
    abstract_dist = self._get_abstract_dist_for(req_to_install)
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 242, in _get_abstract_dist_for
    self.require_hashes
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 362, in prepare_linked_requirement
    abstract_dist.prep_for_dist(finder, self.build_isolation)
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 171, in prep_for_dist
    self.req.prepare_metadata()
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/req/req_install.py", line 537, in prepare_metadata
    self.run_egg_info()
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/req/req_install.py", line 615, in run_egg_info
    command_desc='python setup.py egg_info')
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/utils/misc.py", line 776, in call_subprocess
    % (command_desc, proc.returncode, cwd))
InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-EngJ2Q/
+ '[' 1 -ne 0 ']'
+ echo 'Failed pip install'
Failed pip install
+ exit 1
Failed to update image: starlingx/stx-nova:master-centos-stable-20190715T233000Z.0

Zhipeng

-----Original Message-----
From: Penney, Don [mailto:Don.Penney at windriver.com]
Sent: 2019年7月20日 2:04
To: Dean Troyer ; starlingx-discuss at lists.starlingx.io
Cc: zhu.boxiang at 99cloud.net
Subject: Re: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX

For updating an image for testing, take a look at:
https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Incremental_Image_Updates

As Dean notes, clone the repo and cherry-pick the commit, and then do something like:

time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \
    --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \
    --module-src ${path_to_cloned_repo}/stx-nova

ie:

# clone stx-nova, get stx/stein.2
cd /localdisk/loadbuild/dpenney/
mkdir nova-update
cd nova-update/
git clone https://github.com/starlingx-staging/stx-nova.git
cd stx-nova/
git fetch https://github.com/starlingx-staging/stx-nova.git stx/stein.2
git checkout FETCH_HEAD

# cherry-pick update
git fetch https://review.opendev.org/openstack/nova refs/changes/69/651969/13 && git cherry-pick FETCH_HEAD
# Fix up conflicts, etc

# Build updated image, from 20190715T233000Z build as base,
# specifying cloned/modified repo
time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \
    --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \
    --module-src /localdisk/loadbuild/dpenney/nova-update/stx-nova \
    --user dpenney

This produces an updated image in the local registry:
Updated image: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1

If you also set a registry with --registry and use the --push option, the command will push the updated image to that registry.

This also produces an image record file:

$ cat ${MY_WORKSPACE}/std/update-images/unnamed-update/image-updates.lst
dpenney/stx-nova:master-centos-stable-20190715T233000Z.1

Which you can pass as an argument to build-helm-charts.sh when building your application tarball for testing:

build-helm-charts.sh \
    --image-record http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/docker-images/images-centos-stable-versioned.lst \
    --image-record ${MY_WORKSPACE}/std/update-images/unnamed-update/image-updates.lst \
    --label centos-stable-versioned

If you look at the yaml file in the tarball, you can see that it now references the updated image:

$ tar xzf ${MY_WORKSPACE}/std/build-helm/stx/stx-openstack-1.0-17-centos-stable-versioned.tgz -O ./stx-openstack.yaml | grep stx-nova:
      nova_api: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_cell_setup: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_compute: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_compute_ironic: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_compute_ssh: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_conductor: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_consoleauth: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_db_sync: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_novncproxy: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_placement: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_scheduler: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_spiceproxy: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
      nova_spiceproxy_assets: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1

-----Original Message-----
From: Dean Troyer [mailto:dtroyer at gmail.com]
Sent: Friday, July 19, 2019 1:08 PM
To: starlingx-discuss at lists.starlingx.io
Cc: zhu.boxiang at 99cloud.net
Subject: Re: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX

On Fri, Jul 19, 2019 at 10:57 AM Yong Hu wrote:
> For LP[0], there is a patch (https://review.opendev.org/#/c/651969/)
> in Nova upstream, what's the method/process for StarlingX to
> cherry-pick it for testing a LP reported in StarlingX?
>
> [0]: https://bugs.launchpad.net/starlingx/+bug/1820882

At a high level you would clone the stx-nova repo stx/stein.2 branch [0] and cherry-pick/backport the commit you want to test, and rebuild the Nova docker image and test that. I do have some Zuul jobs in starlingx/tis-repo that will pull that Nova branch and run the unit, functional and pep8 tox jobs to test in OpenStack CI. Given that the stx/stein branches also carry the NUMA live migration patches, you would probably want to do whatever other live migration testing would be done upstream to validate this in the StarlingX context.
dt

[0] https://github.com/starlingx-staging/stx-nova/tree/stx/stein.2

--
Dean Troyer
dtroyer at gmail.com

From Don.Penney at windriver.com Tue Jul 23 04:06:20 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Tue, 23 Jul 2019 04:06:20 +0000
Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
In-Reply-To: <93814834B4855241994F290E959305C7530AF38E@SHSMSX104.ccr.corp.intel.com>
References: <6703202FD9FDFF4A8DA9ACF104AE129FC152D0F3@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF17A@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC152D3B5@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF38E@SHSMSX104.ccr.corp.intel.com>
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC152D75E@ALA-MBD.corp.ad.wrs.com>

The extra text in the output of your --version compared with mine is concerning; I think it is getting picked up when the hardcode_python_module_version function runs, causing the corruption (the unterminated sed command seen in your log). The extra lines appear to be INFO logs, which do not get printed to stdout by default on our systems. The function expects the version string to be the only thing on stdout when the --version command runs. Is there some python config file on your host that sets the default logging level to logging.INFO or logging.DEBUG instead of logging.WARN?

-----Original Message-----
From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
Sent: Monday, July 22, 2019 11:22 PM
To: Penney, Don; Dean Troyer; starlingx-discuss at lists.starlingx.io
Cc: zhu.boxiang at 99cloud.net
Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX

[snip]
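One way to test that theory from the cloned repo (a sketch only; it assumes the same shell environment used for the failed build):

# anything beyond the version string here ends up inside the sed expression,
# which would explain the "unterminated `s' command" error in the log above
python ./setup.py --version

# the python default root logger level is WARNING (30); a lower value suggests
# a site-wide config is turning on INFO/DEBUG output
python -c "import logging; print(logging.getLogger().getEffectiveLevel())"

# possible local workaround: keep only the last stdout line, which is the version
python ./setup.py --version 2>/dev/null | tail -n 1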
From zhipengs.liu at intel.com Tue Jul 23 06:09:27 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Tue, 23 Jul 2019 06:09:27 +0000
Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC152D75E@ALA-MBD.corp.ad.wrs.com>
References: <6703202FD9FDFF4A8DA9ACF104AE129FC152D0F3@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF17A@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC152D3B5@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF38E@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC152D75E@ALA-MBD.corp.ad.wrs.com>
Message-ID: <93814834B4855241994F290E959305C7530AF3B3@SHSMSX104.ccr.corp.intel.com>

Hi Don,

Thanks for your help! It works now, as shown below. I am not sure why it did not work yesterday.

~/starlingx/cgcs-root/build-tools/build-docker-images$ bash update-stx-image.sh --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 --module-src ~/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova --user zhipengl
.........
Successfully built nova
Installing collected packages: nova
  Found existing installation: nova 19.0.1.dev116
    Uninstalling nova-19.0.1.dev116:
      Created temporary directory: /tmp/pip-uninstall-KrVTRI
      Removing file or directory /var/lib/openstack/bin/nova-api
      Removing file or directory /var/lib/openstack/bin/nova-api-metadata
      Removing file or directory /var/lib/openstack/bin/nova-api-os-compute
      Removing file or directory /var/lib/openstack/bin/nova-api-wsgi
      Removing file or directory /var/lib/openstack/bin/nova-cells
      Removing file or directory /var/lib/openstack/bin/nova-compute
      Removing file or directory /var/lib/openstack/bin/nova-conductor
      Removing file or directory /var/lib/openstack/bin/nova-console
      Removing file or directory /var/lib/openstack/bin/nova-consoleauth
      Removing file or directory /var/lib/openstack/bin/nova-dhcpbridge
      Removing file or directory /var/lib/openstack/bin/nova-manage
      Removing file or directory /var/lib/openstack/bin/nova-metadata-wsgi
      Removing file or directory /var/lib/openstack/bin/nova-network
      Removing file or directory /var/lib/openstack/bin/nova-novncproxy
      Removing file or directory /var/lib/openstack/bin/nova-placement-api
      Removing file or directory /var/lib/openstack/bin/nova-policy
      Removing file or directory /var/lib/openstack/bin/nova-rootwrap
      Removing file or directory /var/lib/openstack/bin/nova-rootwrap-daemon
      Removing file or directory /var/lib/openstack/bin/nova-scheduler
      Removing file or directory /var/lib/openstack/bin/nova-serialproxy
      Removing file or directory /var/lib/openstack/bin/nova-spicehtml5proxy
      Removing file or directory /var/lib/openstack/bin/nova-status
      Removing file or directory /var/lib/openstack/bin/nova-xvpvncproxy
      Created temporary directory: /var/lib/openstack/etc/~ova
      Removing file or directory /var/lib/openstack/etc/nova/
      Created temporary directory: /var/lib/openstack/lib/python2.7/site-packages/~ova-19.0.1.dev116.dist-info
      Removing file or directory /var/lib/openstack/lib/python2.7/site-packages/nova-19.0.1.dev116.dist-info/
      Created temporary directory: /var/lib/openstack/lib/python2.7/site-packages/~ova
      Removing file or directory /var/lib/openstack/lib/python2.7/site-packages/nova/
      Successfully uninstalled nova-19.0.1.dev116
Successfully installed nova-19.0.1.dev117
Cleaning up...
Removed build tracker '/tmp/pip-req-tracker-OryF4V'
+ '[' 0 -ne 0 ']'
+ run_customization_script
+ '[' -x /image-update/customize.sh ']'
+ exit 0
sha256:89c9ae9e438412f08f5ae0b85e562e657cf78542e763c595c52083bc574f9b31
Updated image: zhipengl/stx-nova:master-centos-stable-20190715T233000Z.1

Zhipeng

-----Original Message-----
From: Penney, Don [mailto:Don.Penney at windriver.com]
Sent: 2019年7月23日 12:06
To: Liu, ZhipengS ; Dean Troyer ; starlingx-discuss at lists.starlingx.io
Cc: zhu.boxiang at 99cloud.net
Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX

[snip]
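To make the updated image usable from lab nodes, the --registry and --push options Don mentioned earlier should push it in the same step. A sketch only; the registry address is a placeholder:

# same invocation as the successful run above, plus push to a private registry
bash update-stx-image.sh \
    --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \
    --module-src ~/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova \
    --user zhipengl \
    --registry <your-registry> --push

The resulting image record file can then be passed to build-helm-charts.sh, as described in Don's earlier mail, to produce a test application tarball that references the updated image.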
dt

[0] https://github.com/starlingx-staging/stx-nova/tree/stx/stein.2

--
Dean Troyer
dtroyer at gmail.com

_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
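As a concrete illustration of Dean's suggestion, a minimal sketch of verifying a cherry-pick locally before building the image (the clone path reuses Don's example above; the tox targets are the standard nova ones Dean mentions):

cd /localdisk/loadbuild/dpenney/nova-update/stx-nova
# fetch and apply the candidate fix, as in the steps earlier in this thread
git fetch https://review.opendev.org/openstack/nova refs/changes/69/651969/13 && git cherry-pick FETCH_HEAD
# run the same checks the upstream Zuul jobs would run
tox -e py27          # unit tests
tox -e functional    # functional tests
tox -e pep8          # style checks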
From zhu.boxiang at 99cloud.net Tue Jul 23 08:22:52 2019 From: zhu.boxiang at 99cloud.net (Zhu Boxiang) Date: Tue, 23 Jul 2019 16:22:52 +0800 Subject: [Starlingx-discuss] About sanity test block issue - Create instance from Image or from Volume fails In-Reply-To: <93814834B4855241994F290E959305C7530AF378@SHSMSX104.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C7530AF337@SHSMSX104.ccr.corp.intel.com> <93814834B4855241994F290E959305C7530AF378@SHSMSX104.ccr.corp.intel.com> Message-ID: <1DE11A9F-7D1C-4228-807F-52DC0D242CD4@99cloud.net>
An HTML attachment was scrubbed... URL:

From zhipengs.liu at intel.com Tue Jul 23 09:19:58 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 23 Jul 2019 09:19:58 +0000 Subject: [Starlingx-discuss] About sanity test block issue - Create instance from Image or from Volume fails In-Reply-To: <1DE11A9F-7D1C-4228-807F-52DC0D242CD4@99cloud.net> References: <93814834B4855241994F290E959305C7530AF337@SHSMSX104.ccr.corp.intel.com> <93814834B4855241994F290E959305C7530AF378@SHSMSX104.ccr.corp.intel.com> <1DE11A9F-7D1C-4228-807F-52DC0D242CD4@99cloud.net> Message-ID: <93814834B4855241994F290E959305C7530AF424@SHSMSX104.ccr.corp.intel.com>

Hi all, After discussing with Boxiang, simply cherry-picking upstream patch 640271 might not fix the issue. So I propose to revert the patch below to unblock the sanity test first: https://review.opendev.org/#/c/661512/ If we want to merge that change later, we need to figure out whether the nova side or other places need to change as well. Thanks! Zhipeng

From: Zhu Boxiang [mailto:zhu.boxiang at 99cloud.net] Sent: July 23, 2019 16:23 To: Liu, ZhipengS Cc: Perez Ibarra, Maria G; starlingx-discuss; Zhang Kunpeng; Huang Shuquan Subject: Re: [Starlingx-discuss] About sanity test block issue - Create instance from Image or from Volume fails

@zhipeng First off, the default value of force_raw_images in the nova project is True. In stx 1.0 we set it to False and forced the conversion to raw in RBD with some StarlingX-specific code (not in upstream nova). In stx 2.0 we use upstream nova, so that StarlingX code is no longer there to force the conversion to raw in RBD. So we must set force_raw_images to True; if we don't, VMs will fail to boot from qcow2 images. I know that neither upstream nova stein nor the stx stein branch currently includes the patch [0], but I think that is not a problem: with or without that patch, when we use remote storage (Ceph) as the nova backend we should set force_raw_images to True.
[0] https://review.opendev.org/#/c/640271/

On 7/23/2019 11:16, Liu, ZhipengS wrote: Sorry for pasting a wrong LP link in my last mail - corrected below!

From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: July 23, 2019 10:32 To: Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: zhu.boxiang at 99cloud.net Subject: [Starlingx-discuss] About sanity test block issue - Create instance from Image or from Volume fails

Hi Boxiang and all, https://bugs.launchpad.net/starlingx/+bug/1837241 This looks like a regression; please see the LP link above for more info. The test log shows the error below:
fault message: u'Build of instance ab3dfe11-3ac3-4a26-a1ce-3bff1e2ca78b aborted: Image 69e1593f-4d03-4cd5-9779-8ffc1d227da0 is unacceptable: Converted to raw, but format is now qcow2'
In 20190719T013000Z/outputs/CHANGELOG.txt I saw we merged https://review.opendev.org/#/c/661512/
==================================================================
Use true for force_raw_images when using ceph image backend
We need this patch for two reasons: 1) StarlingX's nova does not have patch [0] yet. We use remote storage (Ceph) as the nova backend; if we set force_raw_images to False and boot VMs from a qcow2 image, the VMs will fail to boot. 2) Once StarlingX's nova picks up patch [0], if we still set force_raw_images to False, the nova-compute service will refuse to start. So we must set force_raw_images to True in all cases.
[0] https://review.opendev.org/#/c/640271/
==================================================================
However, I double checked our current stein.2 nova, and [0] is not included. This is the problem! I have 2 proposals:
1) Revert 661512 to unblock the sanity test first
2) Cherry-pick [0] to nova stein.2
Any comment? Thanks! Zhipeng

From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: July 23, 2019 7:49 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-22 (link)
Status: RED
===========================================
Sanity Test is executed in a Containers - Bare Metal Environment
AIO - Simplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 49 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 64 TCs PASS ]
Standard - Dedicated Storage (2+2+2): Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS], 24 TCs FAIL | Sanity Platform 09 TCs [PASS] | TOTAL: [ 66 TCs PASS ]
===========================================
Sanity Test is executed in a Containers - Virtual Environment
AIO - Simplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 49 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS], 24 TCs FAIL | Sanity Platform 07 TCs [PASS] | TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 08 TCs [PASS] | TOTAL: [ 65 TCs PASS ]
Standard - External Storage (2+2+2): Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS], 24 TCs FAIL | Sanity Platform 08 TCs [PASS] | TOTAL: [ 65 TCs PASS ]
-----------------------------------------------------------------------------------
Create instance from Image or from Volume fails https://bugs.launchpad.net/starlingx/+bug/1837241
Regards Maria G.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
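To make the force_raw_images discussion concrete, a minimal sketch (the image file names are illustrative; the nova option and the qemu-img commands are standard):

# nova.conf on the computes - what the merged change effectively enforces:
#   [DEFAULT]
#   force_raw_images = True

# checking and pre-converting a qcow2 image to raw by hand before uploading it:
qemu-img info cirros-0.4.0-x86_64-disk.img        # reports "file format: qcow2"
qemu-img convert -f qcow2 -O raw cirros-0.4.0-x86_64-disk.img cirros.raw
openstack image create --disk-format raw --container-format bare --file cirros.raw cirros-raw

With force_raw_images = True nova performs this conversion itself at boot time, which is what the Ceph/RBD backend needs.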
From ezpeerchen at gmail.com Tue Jul 23 10:39:41 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Tue, 23 Jul 2019 18:39:41 +0800 Subject: [Starlingx-discuss] [ STX R1.0] How to reset system to default and re-run "sudo config_controller" ? Message-ID:

Dear all, Environment: STX R1.0 (2018/10) How can I reset the system to default and re-run "sudo config_controller"? Thanks
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Bill.Zvonar at windriver.com Tue Jul 23 12:58:24 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 23 Jul 2019 12:58:24 +0000 Subject: [Starlingx-discuss] Community Call (July 24, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AB1732@ALA-MBD.corp.ad.wrs.com>

Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include:
- sanity - any red sanities since last Community meeting?
- reviews in need of attention
- defect trend / gating launchpads
- bitergia update: see https://etherpad.openstack.org/p/stx-bitergia
- first contact update - mailing list responsiveness - see https://etherpad.openstack.org/p/stx-first-contact (at the bottom)
- open actions from previous meetings
Please feel free to add topics on the etherpad [0]. Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190724T1400

From cindy.xie at intel.com Tue Jul 23 13:44:36 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 23 Jul 2019 13:44:36 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/24 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE765C@SHSMSX104.ccr.corp.intel.com>

All, Please see the agenda for the 7/24 call below:
- kernel minor version upgrade status update (Shuai)
- stx 2.0 bug triage/review (Cindy)
- Opens (all)
Please feel free to add more topics in case I missed any. Thx. - cindy

-----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, July 24, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236
* Cadence and time slot: Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
* Call details: Zoom link: https://zoom.us/j/342730236; dial in from phone (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923; Meeting ID: 342 730 236; international numbers available: https://zoom.us/u/ed95sU7aQ
* Meeting agenda and minutes: https://etherpad.openstack.org/p/stx-distro-other
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From scott.little at windriver.com Tue Jul 23 13:46:42 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 23 Jul 2019 09:46:42 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 189 - Failure! In-Reply-To: <1829751717.17.1563838220234.JavaMail.javamailuser@localhost> References: <1829751717.17.1563838220234.JavaMail.javamailuser@localhost> Message-ID:

CENGN was having issues connecting to opendev last night. Connectivity is back to normal this morning. I will relaunch the build, with containers. Scott

On 2019-07-22 7:30 p.m., build.starlingx at gmail.com wrote:
> Project: STX_build_master_master
> Build #: 189
> Status: Failure
> Timestamp: 20190722T233000Z
> Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190722T233000Z/logs
> --------------------------------------------------------------------------------
> Parameters
> BUILD_CONTAINERS_DEV: false
> BUILD_CONTAINERS_STABLE: false
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From cindy.xie at intel.com Tue Jul 23 14:00:10 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 23 Jul 2019 14:00:10 +0000 Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE76BF@SHSMSX104.ccr.corp.intel.com>

Hi, Bin I guess you're located in China and have a firewall blocking some docker images, right? You may have to set up your local registry in your lab. Thx. - cindy

From: Yang, Bin [mailto:Bin.Yang at windriver.com] Sent: Tuesday, July 23, 2019 8:53 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node

Dear experts, I have been trying to install stx milestone 3 on VirtualBox in the standard configuration. I managed to install 2 controller nodes and 1 worker node, and provisioned them according to the instructions on the wiki. I then uploaded the openstack helm charts and applied them, but the apply failed.
As I investigated the root cause, I found that the worker node is not in Ready status:

[sysadmin at controller-0 ~(keystone_admin)]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
compute-0 NotReady 3h32m v1.13.5
controller-0 Ready master 14d v1.13.5
controller-1 Ready master 13d v1.13.5

The root cause is that the worker node has no direct access to the external network, and hence is unable to pull docker images via the proxy:

[sysadmin at controller-0 ~(keystone_admin)]$ kubectl -n kube-system describe pods kube-proxy-pftk8
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreatePodSandBox 38s (x462 over 3h36m) kubelet, compute-0 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

compute-0:~# docker pull k8s.gcr.io/pause:3.1
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

The proxy configuration, and the traceroute to the docker proxy:

compute-0:~# cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://128.224.230.5:9090"
Environment="HTTPS_PROXY=http://128.224.230.5:9090"
Environment="NO_PROXY=localhost,127.0.0.1,registry.local,192.168.204.2,192.168.204.3,10.0.2.25,10.0.2.26,192.168.204.4,10.0.2.27"

compute-0:~# traceroute 128.224.230.5
traceroute to 128.224.230.5 (128.224.230.5), 30 hops max, 60 byte packets
1 controller-0 (192.168.204.3) 0.200 ms 0.256 ms 0.208 ms
2 * * *
3 * * *

Can anybody explain how a worker node without access to the external network is supposed to pull docker images? How can I work around this issue?

Thanks Best Regards, Bin Yang, Solution Engineering Team, Wind River ONAP Multi-VIM/Cloud PTL Direct +86,10,84777126 Mobile +86,13811391682 Fax +86,10,64398189 Skype: yangbincs993
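For what it's worth, a generic triage sketch for a NotReady node (the node and pod names are taken from the output above):

kubectl describe node compute-0                 # inspect the Conditions and Events sections
kubectl -n kube-system get events --field-selector involvedObject.name=kube-proxy-pftk8
kubectl -n kube-system logs kube-proxy-pftk8    # will likely fail here, since the sandbox image cannot be pulled

The describe output usually states why kubelet marked the node NotReady, which here points back at the failed sandbox image pull.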
From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Tuesday, July 23, 2019 7:49 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722
Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-22 (link) Status: RED [...]
Regards Maria G.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zhipengs.liu at intel.com Tue Jul 23 14:05:11 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 23 Jul 2019 14:05:11 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722 In-Reply-To: References: Message-ID: <93814834B4855241994F290E959305C7530AF4DC@SHSMSX104.ccr.corp.intel.com>

Hi Maria G and Cristopher Lemus, Could you help verify this blocking issue with the engineering build (EB) below, which reverts "Use true for force_raw_images when using ceph image backend" (https://review.opendev.org/#/c/672287/)? You can get the EB from: http://dcp-dev.intel.com/pub/starlingx/stx-eng-build/163/outputs/ Thanks! Zhipeng
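A rough sketch of the re-test flow with such an engineering build (the tarball name reuses the example from earlier in this digest and is illustrative; the system application-* commands are the standard StarlingX CLI):

system application-remove stx-openstack
system application-delete stx-openstack
system application-upload stx-openstack-1.0-17-centos-stable-versioned.tgz
system application-apply stx-openstack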
From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: July 23, 2019 7:49 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722
Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-22 (link) Status: RED [...]
Regards Maria G.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Al.Bailey at windriver.com Tue Jul 23 14:10:56 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Tue, 23 Jul 2019 14:10:56 +0000 Subject: [Starlingx-discuss] Building openstack-helm under OpenSUSE OBS after option serve has been removed from helm fails In-Reply-To: <5f470fa9-3de3-eefe-dc2e-e0641dbb7f0b@fridu.net> References: <5f470fa9-3de3-eefe-dc2e-e0641dbb7f0b@fridu.net> Message-ID:

In CentOS, openstack-helm is installing this version in mock:
helm-2.13.1-0.tis.2.x86_64 1560862674 37163874 f8bf2d311350ac6b6b4c3059b16cf54d installed
However, the spec file we use does not actually specify an upper version limit for helm, so presumably if we were to update our rpm repo and pick up a newer helm we would likely hit the same issue. https://opendev.org/starlingx/upstream/src/branch/master/openstack/openstack-helm/centos/openstack-helm.spec#L43
For now, I'd suggest setting a maximum helm version of 2.13 in your spec file for that BuildRequires. Al

-----Original Message----- From: Dominig ar Foll (Intel Open Source) [mailto:dominig.arfoll at fridu.net] Sent: Monday, July 22, 2019 12:13 PM To: Starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Building openstack-helm under OpenSUSE OBS after option serve has been removed from helm fails

Hello, the build process of the openstack-helm package under the OpenSUSE OBS expects the 'serve' option of the helm command to be available. The 'serve' option has been removed from helm (commit b8eb479a4f3b1934770c75ed9013dfe263d8940f from April 2018) and that change has now surfaced in the OpenSUSE packages. Would someone be able to indicate a viable alternative to the "helm serve --local-repo" mode?

--
Dominig ar Foll
Senior Software Architect
Intel Open Source Technology Centre

_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
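Al's version pin, sketched in RPM spec syntax (the exact upper bound is illustrative; anything below the release that drops 'serve' would do):

BuildRequires: helm >= 2.13
BuildRequires: helm < 2.14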
From vm.rod25 at gmail.com Tue Jul 23 15:09:59 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 23 Jul 2019 10:09:59 -0500 Subject: [Starlingx-discuss] How does STX detect a failed vm? Message-ID:

Hi team We are testing whether STX can detect a failed virtual machine. The way we are making the VM fail is:
$ echo 1 > /proc/sys/kernel/sysrq
$ echo c > /proc/sysrq-trigger
This uses the SysRq kernel mechanism to literally crash the system [0]. After we execute this command over ssh, the connection is lost, and when we check the log of the VM with "openstack console log show" we can see the kernel panic. However, when we check nova and horizon there is no log that mentions the VM has failed; in fact, the horizon dashboard shows the VM as running. The qemu process is still running but the VM hangs. Do you know a way to detect this kind of failure in a VM? What way do you recommend to make a VM fail? Killing the VM's process or powering it off is not the kind of test we are looking for. Also, this could be a valid scenario for users/customers, where many VMs hit a kernel panic but horizon does not show anything unusual. Thanks for all the help Regards Victor R
[0] http://ngelinux.com/what-is-proc-sysrq-trigger-in-linux-and-how-to-use-sysrq-kernel-feature/
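One mechanism worth trying for this (a sketch, not a confirmed STX recipe): attach a virtual watchdog to the guest via a flavor extra spec, so the hypervisor notices when a panicked guest stops servicing it.

# hw:watchdog_action is a standard nova extra spec (disabled/reset/poweroff/pause/none)
openstack flavor set m1.small --property hw:watchdog_action=reset

A watchdog daemon must be running inside the guest and feeding /dev/watchdog; after a kernel panic it stops, and the configured action fires, which then shows up as an instance event.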
From marcel at schaible-consulting.de Tue Jul 23 15:19:03 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Tue, 23 Jul 2019 17:19:03 +0200 (CEST) Subject: [Starlingx-discuss] Help: How to update StarlingX? Message-ID: <772516424.144004.1563895143503@communicator.strato.com>

Hi, at the moment we are running our system with a StarlingX version from around mid June. What is the recommended way to update our installation, e.g. by adding the StarlingX repository to our installation? In the past we were reinstalling everything, which is a real pain. Thanks Marcel

From Dariush.Eslimi at windriver.com Tue Jul 23 15:54:00 2019 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Tue, 23 Jul 2019 15:54:00 +0000 Subject: [Starlingx-discuss] Help: How to update StarlingX? In-Reply-To: <772516424.144004.1563895143503@communicator.strato.com> References: <772516424.144004.1563895143503@communicator.strato.com> Message-ID:

Hi, There is no way to update your installation other than a reinstall. The update feature will be part of stx 3.0, and the supported from-build is not defined yet, as it may require changes to stx 2.0. Thanks, Dariush

-----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: July-23-19 11:19 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Help: How to update StarlingX? [...]

_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From dominig.arfoll at fridu.net Tue Jul 23 17:36:21 2019 From: dominig.arfoll at fridu.net (Dominig ar Foll (Intel Open Source)) Date: Tue, 23 Jul 2019 19:36:21 +0200 Subject: [Starlingx-discuss] Building openstack-helm under OpenSUSE OBS after option serve has been removed from helm fails (work around) In-Reply-To: References: <5f470fa9-3de3-eefe-dc2e-e0641dbb7f0b@fridu.net> Message-ID:

On 23/07/2019 16:10, Bailey, Henry Albert (Al) wrote:
> In Centos, openstack-helm is installing this version in mock: helm-2.13.1-0.tis.2.x86_64 ... installed
It's coming. Just a question of time.
> However the spec file we use does not actually specify an upper version value for helm, so presumably if we were to update our rpm repo and pick up a higher version of helm we would likely hit the same issue.
I am afraid that a solution not using "serve" will be required.
> For now, I'd suggest setting a max version of helm in your spec file to 2.13 for that BuildRequires
As a work-around, I have built a version 2.13 of helm for 15.0 and 15.1. If the multiOS team wants it, they have to update their meta file to reference these repos.

--
Dominig ar Foll
Senior Software Architect
Intel Open Source Technology Centre
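For the 'serve'-less workflow itself, a minimal sketch of a drop-in replacement (any static HTTP server works; port 8879 mirrors helm serve's default, and the paths are illustrative):

mkdir -p ~/charts && cp ./*.tgz ~/charts
helm repo index ~/charts --url http://127.0.0.1:8879
(cd ~/charts && python3 -m http.server 8879 &)
helm repo add local http://127.0.0.1:8879
helm repo update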
From ada.cabrales at intel.com Tue Jul 23 23:02:45 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 23 Jul 2019 23:02:45 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting notes - 07/23/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CE6E9C7@FMSMSX112.amr.corp.intel.com>

Agenda for 07/23 Attendees: Cristopher, Yong, Elio, Dominig, Fer, JC, Jose, JP, Maria P, Numan, Yang, Ada, Richo

1. Sanity status - Cristopher
Create instance from Image or from Volume fails - https://bugs.launchpad.net/starlingx/+bug/1837241. The fix has been merged and will be available in tonight's build; an engineering build will be tested on virtual. Platform-integ state stays in 'uploaded' status - should be fixed in the next ISO. On the Intel side, when a suite fails at a stage, we re-run it; if it passes on manual execution, we don't report the issue. However, we are seeing intermittent failures that appear to be gone on the second execution. Let's report all of these - they could be a systemic failure. There's a field in the launchpad template for stating whether an issue is intermittent; use it. Numan: please send the list of bugs found in sanity by WR. Info on Intel tests was sent last Wednesday.

2. Regression testing status stx.2.0 - Elio, Numan
Total / Pass / Fail / Blocked / Obsolete - 493 / 287 / 11 / 37 / 21 - Pass rate: 96.3%. Regression due date is next week (Aug 2nd). ~80% of manual execution done, including the storage tests. Let's mark the low-risk, already-passed ones as 2nd priority. The 2nd run of automated regression is about 5% done. Failures - we don't have a way to verify encrypted devices; talk with Chris - Numan will also talk with him. SRIOV - PCI passthrough - work with Richo for help; make sure the hardware is enabled for SRIOV - this is working for WR. Still investigating the NUMA tests. IPv6 will be run for basic functionality. A lab will also be set up in WR for testing. 75 tests remain for execution; by the end of this week we will know whether help is required. Automated regression report sent by email - Ada to consolidate. Looks better than the last run - ~81% pass rate. Ada to build a full report covering tests/failures/manual/automated.

3. Pytest framework committed - Ada
Code is merged. Documentation in the wiki is missing; the readme is already in rst. Ada to talk with Abraham to get direction - Jose states the wiki can be updated automatically - let's find out with Abraham. Another option is putting a link to the documentation in the readme file, or adding the info to the documentation and copying that to the wiki. For bugs / changes, submit launchpads using the stx.testauto tag.

4. Robot framework - plan for having it outside? - Jose
Code is being cleaned up. Documentation is also being completed. The plan is to have the code by the first week of August. The process will be similar to what we did for the pytest FW.

5. Pending actions:
- Inclusion of the automated test run report in the regression report
- Dashboard for testing results
- Shared space for files (Google Drive might not be the best option)
- Project for consolidating the sanity run on both sides (WR and Intel)

6. Opens
Yong - high launchpad - the nova placement patch has been sent; info sent by email to schedule it. This requires specific hardware (specific NICs) - Richo to review and schedule the run based on hardware availability. Numan to JC - re-send the email with the sanity test cases.

Regards Ada
From maria.g.perez.ibarra at intel.com Tue Jul 23 23:06:02 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 23 Jul 2019 23:06:02 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/23/19 Message-ID:

StarlingX 2.0 Release Status: ISO: BUILD_ID="20190718T013000Z" from (link)
----------------------------------------------------------------------
Overall Results: Total = 493, Pass = 287, Fail = 11, Blocked = 37, Not Run = 137, Obsolete = 21, Total executed = 335, Pass Rate = 96.30%
Formula used: Pass Rate = pass * 100 / (pass + fail)
----------------------------------------------------------------------
Results per Domain:
Regression - AIO-SX: 25 PASS | 1 OBSOLETE
Regression - Backup & Restore
Regression - Distributed Cloud
Regression - Gnocchi: 15 PASS
Regression - FM: 3 PASS
Regression - HA: 9 PASS | 1 FAIL
Regression - Heat: 12 PASS | 1 OBSOLETE
Regression - Horizon: 4 PASS
Regression - Install and Config: 5 PASS
Regression - Maintenance: 7 PASS | 1 FAIL
Regression - Networking: 97 PASS | 4 FAIL | 19 BLOCKED | 15 OBSOLETE
Regression - Nova: 10 PASS | 2 FAIL
Regression - Security: 34 PASS | 1 FAIL | 6 BLOCKED | 1 OBSOLETE
Regression - Storage: 8 PASS | 2 OBSOLETE
Regression - Inventory: 29 PASS | 1 FAIL
System Test: 20 PASS | 1 FAIL | 12 BLOCKED | 1 OBSOLETE
Regression - new features: 9 PASS
---------------------------------------------------------------------------
Bugs:
Controller can't unlock after lock on AIO-SX - https://bugs.launchpad.net/starlingx/+bug/1833472
User does not log in within configured time (60s), login is aborted - https://bugs.launchpad.net/starlingx/+bug/1833469
After pulling the data cable on the compute, no alarm was triggered - https://bugs.launchpad.net/starlingx/+bug/1834512
System account doesn't block after invalid login attempts - https://bugs.launchpad.net/starlingx/+bug/1814345
Containers: lock_host failed on a host with config_drive VM - https://bugs.launchpad.net/starlingx/+bug/1821026
200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot - https://bugs.launchpad.net/starlingx/+bug/1832047
virsh only listing one volume, even though an additional volume was attached after instantiation - https://bugs.launchpad.net/starlingx/+bug/1834194
3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host - https://bugs.launchpad.net/starlingx/+bug/1834255
stx-openstack apply takes longer when locking and unlocking the standby controller - https://bugs.launchpad.net/starlingx/+bug/1834083
Port list was not showing for some computes during install - https://bugs.launchpad.net/starlingx/+bug/1834245
Instance created with a flat network spawns in error state - https://bugs.launchpad.net/starlingx/+bug/1835965
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute - https://bugs.launchpad.net/starlingx/+bug/1835807
When creating an instance with a pci-passthrough port, getting an error - https://bugs.launchpad.net/starlingx/+bug/1836682
Unexpected output when wiping an unassigned disk - https://bugs.launchpad.net/starlingx/+bug/1836633
VM fails to live migrate after evacuation - https://bugs.launchpad.net/starlingx/+bug/1836402
Application apply fails after compute lock and unlock - https://bugs.launchpad.net/starlingx/+bug/1836609
CirrOS VM login takes too much time, and throws different log errors - https://bugs.launchpad.net/starlingx/+bug/1835575
Live Migration Error: Failed to live migrate instance to host "AUTO_SCHEDULE" - https://bugs.launchpad.net/starlingx/+bug/1837256
Storage group type conversion getting failed - https://bugs.launchpad.net/starlingx/+bug/1837464
stx-openstack in apply-failed after lock/unlock of standby controller - https://bugs.launchpad.net/starlingx/+bug/1837581
Total Bugs: 20
-----------------------------------------------------------------------------
For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033
Regards! Maria G
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From shuicheng.lin at intel.com Wed Jul 24 00:08:22 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 24 Jul 2019 00:08:22 +0000 Subject: [Starlingx-discuss] [ STX R1.0] How to reset system to default and re-run "sudo config_controller" ? In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C76608B69A2@SHSMSX105.ccr.corp.intel.com>

Hi Chen, STX 1.0 does not support re-running "sudo config_controller"; you have to re-install the ISO if config_controller fails. Best Regards Shuicheng

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Tuesday, July 23, 2019 6:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [ STX R1.0] How to reset system to default and re-run "sudo config_controller" ?

Dear all, Environment: STX R1.0 (2018/10) How can I reset the system to default and re-run "sudo config_controller"? Thanks
-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From Bin.Yang at windriver.com Wed Jul 24 00:20:46 2019 From: Bin.Yang at windriver.com (Yang, Bin) Date: Wed, 24 Jul 2019 00:20:46 +0000 Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FE76BF@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35FE76BF@SHSMSX104.ccr.corp.intel.com> Message-ID:

Hi Cindy, Thanks for the response. I am located in China, and I have set up a proxy to work around the connectivity issue. That was verified to work while installing the controllers, and by running docker pull on the controller nodes. The issue I am seeing is that a worker node, which has no access to the OAM network, tries to pull docker images while I am deploying the openstack helm charts with 'system application-apply stx-openstack'. Thanks Best Regards, Bin Yang, Solution Engineering Team, Wind River ONAP Multi-VIM/Cloud PTL Direct +86,10,84777126 Mobile +86,13811391682 Fax +86,10,64398189 Skype: yangbincs993

From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, July 23, 2019 10:00 PM To: Yang, Bin; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node [...]
From: Yang, Bin [mailto:Bin.Yang at windriver.com] Sent: Tuesday, July 23, 2019 8:53 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From maria.g.perez.ibarra at intel.com Wed Jul 24 01:16:12 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 24 Jul 2019 01:16:12 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190723 Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-23 (link)
Status: RED
===========================================
Sanity Test is executed in a Containers - Bare Metal Environment
AIO - Simplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 49 TCs [PASS], 2 TCs FAIL | Sanity Platform 07 TCs [PASS] | TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 64 TCs PASS ]
===========================================
Sanity Test is executed in a Containers - Virtual Environment
AIO - Simplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS], 1 TC FAIL | Sanity OpenStack 49 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS], 1 TC FAIL | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 07 TCs [PASS] | TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] | Provisioning 01 TCs [PASS] | Sanity OpenStack 52 TCs [PASS] | Sanity Platform 08 TCs [PASS] | TOTAL: [ 65 TCs PASS ]
-----------------------------------------------------------------------------------
We are working on collecting the logs to report the corresponding bugs.
Regards Maria G.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From yi.c.wang at intel.com Wed Jul 24 01:25:55 2019 From: yi.c.wang at intel.com (Wang, Yi C) Date: Wed, 24 Jul 2019 01:25:55 +0000 Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35FE76BF@SHSMSX104.ccr.corp.intel.com> Message-ID:

Hi Bin, Worker nodes can access the external network through the controller nodes. In our deployment we use a local registry which is accessible via OAM, and the worker nodes can indeed pull images from that local registry.
- cindy From: Yang, Bin [mailto:Bin.Yang at windriver.com] Sent: Tuesday, July 23, 2019 8:53 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node Dear experts, I have been trying to install stx milestone3 over virtualbox with standard modes, I managed to install 2 controller nodes and 1 worker node, and provisioned them according to the instructions on wiki. Then I uploaded openstack helm charts then apply it, then it failed to accomplish that operation. As I investigate the root cause, I found out that the worker node is not in ready status: [sysadmin at controller-0 ~(keystone_admin)]$ kubectl get nodes NAME STATUS ROLES AGE VERSION compute-0 NotReady 3h32m v1.13.5 controller-0 Ready master 14d v1.13.5 controller-1 Ready master 13d v1.13.5 The root cause is that worker node is lacking of access to external network directly, hence unable to pull docker image via the proxy: [sysadmin at controller-0 ~(keystone_admin)]$ kubectl -n kube-system describe pods kube-proxy-pftk8 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 38s (x462 over 3h36m) kubelet, compute-0 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) compute-0:~# docker pull k8s.gcr.io/pause:3.1 Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) The traceroute to the docker proxy : compute-0:~# cat /etc/systemd/system/docker.service.d/http-proxy.conf [Service] Environment="HTTP_PROXY=http://128.224.230.5:9090" Environment="HTTPS_PROXY=http://128.224.230.5:9090" Environment="NO_PROXY=localhost,127.0.0.1,registry.local,192.168.204.2,192.168.204.3,10.0.2.25,10.0.2.26,192.168.204.4,10.0.2.27" compute-0:~# traceroute 128.224.230.5 traceroute to 128.224.230.5 (128.224.230.5), 30 hops max, 60 byte packets 1 controller-0 (192.168.204.3) 0.200 ms 0.256 ms 0.208 ms 2 * * * 3 * * * Can anybody help explain how a worker node without access to external network could pull docker images? How can I workaround this issue? 
Thanks Best Regards, Bin Yang, Solution Engineering Team, Wind River ONAP Multi-VIM/Cloud PTL Direct +86,10,84777126 Mobile +86,13811391682 Fax +86,10,64398189 Skype: yangbincs993 From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Tuesday, July 23, 2019 7:49 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-22 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] 24 TCs FAIL Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] 24 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] 24 TCs FAIL Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] ----------------------------------------------------------------------------------- Create instance from Image or from Volume fails https://bugs.launchpad.net/starlingx/+bug/1837241 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bin.Yang at windriver.com Wed Jul 24 01:46:10 2019 From: Bin.Yang at windriver.com (Yang, Bin) Date: Wed, 24 Jul 2019 01:46:10 +0000 Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35FE76BF@SHSMSX104.ccr.corp.intel.com> Message-ID: Hi Yi, Thanks for the response. I did test if worker node could access to oam network by ping the docker proxy ip, and the result turned out that worker node cannot reach to that ip. I did the same test over controllers and the controllers could reach to that ip. So I guess this is not an issue of docker settings, it might be something wrong with controllers if controllers are deemed to NAT the traffic from worker nodes to OAM network. Any suggestion on how to check that? Thanks Best Regards, Bin Yang, Solution Engineering Team, Wind River ONAP Multi-VIM/Cloud PTL Direct +86,10,84777126 Mobile +86,13811391682 Fax +86,10,64398189 Skype: yangbincs993 From: Wang, Yi C [mailto:yi.c.wang at intel.com] Sent: Wednesday, July 24, 2019 9:26 AM To: Yang, Bin; Xie, Cindy; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node Hi Bin, Worker nodes can access external network through controller nodes. In our deployment, we use a local registry which is accessible via OAM. Worker nodes do can pull images from the local registry. 
So I suggest you check 1. if your worker nodes can access external network through oam 2. check the docker configuration "/etc/docker/daemon.json" on your worker nodes. Thanks. Yi From: Yang, Bin [mailto:Bin.Yang at windriver.com] Sent: Wednesday, July 24, 2019 8:21 AM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node Hi Cindy, Thanks for the response. I do locate in China, and I have setup a proxy to overcome the connectivity issue. That was verified to work by installing controllers, and by executing docker pull image on controller nodes. The issue I am experiencing is that 'a worker node' have no access to oam network but trying to pull docker image while I am trying to deploy openstack helm using 'system application-apply stx-openstack' commands. Thanks Best Regards, Bin Yang, Solution Engineering Team, Wind River ONAP Multi-VIM/Cloud PTL Direct +86,10,84777126 Mobile +86,13811391682 Fax +86,10,64398189 Skype: yangbincs993 From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, July 23, 2019 10:00 PM To: Yang, Bin; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node Hi, Bin I guess you're located in China and have firewall blocking some docker image, right? You may have to setup your local registry in your lab. Thx. - cindy From: Yang, Bin [mailto:Bin.Yang at windriver.com] Sent: Tuesday, July 23, 2019 8:53 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node Dear experts, I have been trying to install stx milestone3 over virtualbox with standard modes, I managed to install 2 controller nodes and 1 worker node, and provisioned them according to the instructions on wiki. Then I uploaded openstack helm charts then apply it, then it failed to accomplish that operation. 
While investigating the root cause, I found that the worker node is not in ready status:

[sysadmin at controller-0 ~(keystone_admin)]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
compute-0 NotReady 3h32m v1.13.5
controller-0 Ready master 14d v1.13.5
controller-1 Ready master 13d v1.13.5

The root cause is that the worker node lacks direct access to the external network, hence it is unable to pull docker images via the proxy:

[sysadmin at controller-0 ~(keystone_admin)]$ kubectl -n kube-system describe pods kube-proxy-pftk8
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreatePodSandBox 38s (x462 over 3h36m) kubelet, compute-0 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

compute-0:~# docker pull k8s.gcr.io/pause:3.1
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

The docker proxy configuration, and the traceroute to the proxy:

compute-0:~# cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://128.224.230.5:9090"
Environment="HTTPS_PROXY=http://128.224.230.5:9090"
Environment="NO_PROXY=localhost,127.0.0.1,registry.local,192.168.204.2,192.168.204.3,10.0.2.25,10.0.2.26,192.168.204.4,10.0.2.27"

compute-0:~# traceroute 128.224.230.5
traceroute to 128.224.230.5 (128.224.230.5), 30 hops max, 60 byte packets
 1  controller-0 (192.168.204.3)  0.200 ms  0.256 ms  0.208 ms
 2  * * *
 3  * * *

Can anybody explain how a worker node without direct access to the external network is supposed to pull docker images? How can I work around this issue?
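Two quick checks, assuming curl is available on the worker and assuming the controllers use iptables-based forwarding/SNAT for management-network traffic (I have not confirmed how the platform actually sets this up):

compute-0:~# curl -sI --max-time 10 -x http://128.224.230.5:9090 https://k8s.gcr.io/v2/
# fails fast if the proxy is simply unreachable from the worker,
# instead of waiting for docker's much longer pull timeout

controller-0:~$ sudo sysctl net.ipv4.ip_forward
controller-0:~$ sudo iptables -t nat -nvL POSTROUTING | grep -Ei 'masq|snat'
# forwarding should be 1, and some MASQUERADE/SNAT rule should cover the
# management subnet (192.168.204.0/24 here); if not, worker traffic towards
# the OAM side goes unrouted, which would match the traceroute dying after hop 1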
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Bin.Yang at windriver.com Wed Jul 24 01:47:17 2019
From: Bin.Yang at windriver.com (Yang, Bin)
Date: Wed, 24 Jul 2019 01:47:17 +0000
Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node
References: <2FD5DDB5A04D264C80D42CA35194914F35FE76BF@SHSMSX104.ccr.corp.intel.com>
Message-ID:

And could you kindly educate me on how to set up a local docker registry over the OAM network? The network latency caused a lot of issues during my installation of the stx controller nodes.
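A minimal lab-grade sketch of such a registry, assuming a spare host reachable over the OAM network with docker installed, and assuming a plain-HTTP (insecure) registry is acceptable for lab use ("registry-host" below is a placeholder name):

# on the registry host, run the stock registry image
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# seed it with the images the cluster needs
docker pull k8s.gcr.io/pause:3.1
docker tag k8s.gcr.io/pause:3.1 registry-host:5000/pause:3.1
docker push registry-host:5000/pause:3.1

The docker documentation linked later in this thread covers the proper TLS-based setup.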
Thanks
Best Regards, Bin Yang, Solution Engineering Team, Wind River ONAP Multi-VIM/Cloud PTL
Direct +86,10,84777126 Mobile +86,13811391682 Fax +86,10,64398189 Skype: yangbincs993
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From yi.c.wang at intel.com Wed Jul 24 02:07:32 2019
From: yi.c.wang at intel.com (Wang, Yi C)
Date: Wed, 24 Jul 2019 02:07:32 +0000
Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node
In-Reply-To:
References: <2FD5DDB5A04D264C80D42CA35194914F35FE76BF@SHSMSX104.ccr.corp.intel.com>
Message-ID:

Hi Bin,

I can't ping my local registry either. I didn't check the iptables rules on the controller nodes, but I guess ICMP is blocked by some rule; I can connect to external hosts by ssh from my worker nodes. You can refer to the link below on deploying a local registry.
https://docs.docker.com/registry/deploying/

Thanks.
Yi

From: Yang, Bin [mailto:Bin.Yang at windriver.com]
Sent: Wednesday, July 24, 2019 9:47 AM
To: Wang, Yi C ; Xie, Cindy ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node

And could you kindly educate me on how to set up a local docker registry over the OAM network? The network latency caused a lot of issues during my installation of the stx controller nodes.
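On the consuming nodes, a plain-HTTP registry also has to be whitelisted in /etc/docker/daemon.json, e.g. (illustrative only; "registry-host:5000" is a placeholder for the registry's OAM address, and on StarlingX this file may be managed by the platform, so a manual edit may not persist):

compute-0:~# cat /etc/docker/daemon.json
{
  "insecure-registries": ["registry-host:5000"]
}
compute-0:~# systemctl restart docker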
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Bin.Yang at windriver.com Wed Jul 24 02:16:39 2019
From: Bin.Yang at windriver.com (Yang, Bin)
Date: Wed, 24 Jul 2019 02:16:39 +0000
Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node
In-Reply-To:
References: <2FD5DDB5A04D264C80D42CA35194914F35FE76BF@SHSMSX104.ccr.corp.intel.com>
Message-ID:

Hi Yi,

OK, then perhaps I need to test it in other ways. Thanks for your help.

Best Regards, Bin Yang, Solution Engineering Team, Wind River ONAP Multi-VIM/Cloud PTL
Direct +86,10,84777126 Mobile +86,13811391682 Fax +86,10,64398189 Skype: yangbincs993

From: Wang, Yi C [mailto:yi.c.wang at intel.com]
Sent: Wednesday, July 24, 2019 10:08 AM
To: Yang, Bin; Xie, Cindy; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node

Hi Bin,

I can't ping my local registry either. I didn't check the iptables rules on the controller nodes, but I guess ICMP is blocked by some rule; I can connect to external hosts by ssh from my worker nodes. You can refer to the link below on deploying a local registry.
https://docs.docker.com/registry/deploying/

Thanks.
Yi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From cristopher.j.lemus.contreras at intel.com Wed Jul 24 03:18:46 2019
From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J)
Date: Wed, 24 Jul 2019 03:18:46 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722
In-Reply-To: <93814834B4855241994F290E959305C7530AF4DC@SHSMSX104.ccr.corp.intel.com>
References: <93814834B4855241994F290E959305C7530AF4DC@SHSMSX104.ccr.corp.intel.com>
Message-ID: <16079ED3-2F0B-4735-A77A-19D0E3C4F8A0@intel.com>

Hi Zhipeng,

We used the EB (I just learned today that this means Engineering Build) from your link on a Virtual Environment. We are able to launch instances from Image, from Volume Snapshot, and from a Heat Template:

Sanity-Test.Sanity-OpenStack.01-Instance-From-Image [2019-07-23T19:45:20.246Z] Launch Instances :: Launch Cirros and Centos instances. | PASS |
Sanity-Test.Sanity-OpenStack.03-Instance-From-Snapshot [2019-07-23T20:09:29.651Z] Launch Instances :: Launch Cirros instances from snapshot. | PASS |
Sanity-Test.Sanity-OpenStack.04-Instance-From-Heat-Template [2019-07-23T20:58:20.620Z] Create Instance Trough Stack :: Create a Cirros instance using a h... | PASS |

Given that the review https://review.opendev.org/#/c/672287/ has already merged, this will also be tested on bare metal in the next Daily Sanity.

Thanks a lot for checking and fixing this.
Cristopher Lemus

From: "Liu, ZhipengS"
Date: Tuesday, July 23, 2019 at 9:05 AM
To: "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io"
Cc: "Lemus Contreras, Cristopher J"
Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722

Hi Maria G and Cristopher Lemus,

Could you help verify this blocking issue with the EB below, which reverts "Use true for force_raw_images when using ceph image backend"?
https://review.opendev.org/#/c/672287/
You can get the EB from the following location:
http://dcp-dev.intel.com/pub/starlingx/stx-eng-build/163/outputs/

Thanks!
Zhipeng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From zhipengs.liu at intel.com Wed Jul 24 06:16:01 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Wed, 24 Jul 2019 06:16:01 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722
In-Reply-To: <16079ED3-2F0B-4735-A77A-19D0E3C4F8A0@intel.com>
References: <93814834B4855241994F290E959305C7530AF4DC@SHSMSX104.ccr.corp.intel.com> <16079ED3-2F0B-4735-A77A-19D0E3C4F8A0@intel.com>
Message-ID: <93814834B4855241994F290E959305C7530AF781@SHSMSX104.ccr.corp.intel.com>

Thanks Cristopher Lemus! Good to know that!

Zhipeng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From cindy.xie at intel.com Wed Jul 24 13:35:39 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 24 Jul 2019 13:35:39 +0000
Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/24
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FE765C@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FE765C@SHSMSX104.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE8806@SHSMSX104.ccr.corp.intel.com>

Agenda & notes for the 7/24 meeting:
- kernel minor version upgrade status update (Shuai)
  Consensus has been reached to use Option #1: upgrade the kernel to 3.10.0-957.21.3.el7 in the master branch; will not merge before RC1. Will port those 3 patches over to the release branch if the test results are positive.
  Testing has been done on the std kernel; still hitting a technical issue on the RT kernel; expecting to have test results for the RT kernel tomorrow. Only tested on AIO simplex & duplex so far, not yet on multi-node. Use your 3 pending patches to generate an eng-build from SH, no need to send your ISO over; sanity testing needs to be run from Shanghai.
  AR: Zhiguo to talk to Yan Chen about how to run the sanity tests. ETA: end of next week.
- stx 2.0 bug triage/review (Cindy)
  - stx.distro.others: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
    1832854: Alex uploaded logs from the kernel; analyzing, and more instrumentation may be required.
    1836638: mem leak issue on the RT kernel; setting up the environment & working on a repro.
    1837430: Bin to use the kernel debuginfo after 7/19 for debug. CoreDump and debuginfo shall be provided.
  - stx.storage: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage&orderby=-importance&start=0
    1836974: need update from Tingjie.
    1833738: Bob is continuing to work on this - ETA for review end of this week.
    1834539: Daniel has a patch uploaded, installing; ETA to have the fix today or tomorrow.
    1836075: Stefan: bug only appears on AIO-DX, suspect an SM issue. Fix is WIP; verifying the fix and testing on real HW.
- Opens (all) - none

From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Tuesday, July 23, 2019 9:45 PM
To: 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent' ; Wold, Saul
Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/24

All,
Please see the agenda for the 7/24 call below:

Agenda for the 7/24 meeting:
- kernel minor version upgrade status update (Shuai)
- stx 2.0 bug triage/review (Cindy)
- Opens (all)

Please feel free to add more topics in case I missed any. Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; 'zhaos'; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; Wold, Saul
Cc: Hu, Wei W; Jones, Bruce E; 'Eslimi, Dariush'; Cobbley, David A; 'Zhi Zhi2 Chang'; Armstrong, Robert H; 'Waines, Greg'; 'Badea, Daniel'; 'Carlos Cebrian'; 'Seiler, Glenn'; Chen, Tingjie; 'Chen, Jacky'; Komiyama, Takeo; Gomez, Juan P; Peng Tan
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, July 24, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
. Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
. Zoom link: https://zoom.us/j/342730236
. Dialing in from phone:
. Dial (for higher quality, dial a number based on your current location):
US: +1 669 900 6833 or +1 646 876 9923
. Meeting ID: 342 730 236
. International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
. https://etherpad.openstack.org/p/stx-distro-other

From Don.Penney at windriver.com Wed Jul 24 13:49:50 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Wed, 24 Jul 2019 13:49:50 +0000
Subject: [Starlingx-discuss] [build] Spec for layered build story available for review
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC152DE6F@ALA-MBD.corp.ad.wrs.com>

Hi folks,

For any interested parties, the spec describing the layered build story has been posted to Gerrit: https://review.opendev.org/672288
Comments or questions are welcome.

Cheers,
Don.

Don Penney, Developer, Wind River
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Ghada.Khalil at windriver.com Wed Jul 24 14:42:07 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Wed, 24 Jul 2019 14:42:07 +0000
Subject: [Starlingx-discuss] Agenda: StarlingX bi-weekly networking sub-project meeting -- 07/25
Message-ID: <151EE31B9FCCA54397A757BC674650F0C1571923@ALA-MBD.corp.ad.wrs.com>

Agenda for tomorrow's networking meeting:
- stx.2.0 networking bugs
o Review bug status
* The bugs are listed in the etherpad (below).
* Bug Primes, please add a summary update to your bug prior to the meeting
o Align importance/priority if needed
- stx.3.0 networking test status
- stx.3.0

We will be meeting at 9:15am Eastern on Thursday.
Regards,
Ghada

-----Original Appointment-----
From: Zhao, Forrest [mailto:forrest.zhao at intel.com]
Sent: Wednesday, April 03, 2019 9:19 PM
To: Zhao, Forrest; Qin, Kailun; Le, Huifeng; Xu, Chenjie; Guo, Ruijing; Mishra, Sharad D; Welch, Matt; Li, Cheng1; Peters, Matt; Khalil, Ghada; Jolliffe, Ian; Rowsell, Brent; Webster, Steven; Legacy, Allain
Cc: Ochulor, Enyinna; Martinez Monroy, Elio; Waheed, Numan; Winnicki, Chris
Subject: StarlingX bi-weekly networking sub-project meeting
When: Thursday, July 25, 2019 9:15 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: Zoom link: https://zoom.us/j/342730236

Meeting agenda and minutes are captured at: https://etherpad.openstack.org/p/stx-networking
Networking team wiki: https://wiki.openstack.org/wiki/StarlingX/Networking

Thanks, Forrest
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From vm.rod25 at gmail.com Wed Jul 24 19:23:11 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Wed, 24 Jul 2019 19:23:11 -0500
Subject: [Starlingx-discuss] Help: How to update StarlingX?
In-Reply-To: <772516424.144004.1563895143503@communicator.strato.com>
References: <772516424.144004.1563895143503@communicator.strato.com>
Message-ID:

On Tue, Jul 23, 2019 at 10:19 AM Marcel Schaible wrote:
>
> Hi,
>
> at the moment we are running our system with a StarlingX version from around mid June.
>
> What is the recommended way to update our installation, e.g., adding a StarlingX repository to our installation? In the past we were reinstalling everything, which is a real pain.
>
> Thanks
>
> Marcel
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

Hi Marcel

Thanks a lot for your mail and inputs. Right now I am in the same position as you are, and I am facing the same pain. I am interested to know more about the details of your requirements, if you could please share them.

For me it would be an update from what I have installed now to the latest green ISO published on CENGN; is that something that might work for you?
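As far as I know, there is no package repository you can simply point an installed system at. For formal updates, StarlingX ships the sw-patch client, and the flow is roughly as follows (assuming you have a signed .patch file built for your load):

sudo sw-patch upload /path/to/fix.patch
sudo sw-patch apply <patch-id>
sudo sw-patch host-install controller-0   # repeat per host; a lock/unlock may be required
sw-patch query

Jumping between arbitrary daily ISOs is a different matter, and so far that has meant a reinstall.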
Regards
Victor R

From Bill.Zvonar at windriver.com Wed Jul 24 19:52:18 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 24 Jul 2019 19:52:18 +0000
Subject: [Starlingx-discuss] Community Call (July 24, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AB2269@ALA-MBD.corp.ad.wrs.com>

- sanity - any red sanities since last Community meeting? - https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.sanity
- yes, we did - per Cristopher, issues with the standard configuration - both issues are (hopefully) resolved
- reviews in need of attention - nothing this week
- documentation update (Abraham for Mike Tullis) - wiki cleanup, doc updates, 2.0 plan
- working to get full alignment between docs and wiki changes
- see the Google Sheet for the status of the conversion https://docs.google.com/spreadsheets/d/1UJjUttsWQRyauATrip0wKGIxSO7DvyetDKmPwvEIDaA/edit#gid=0
- send a Launchpad to address any further issues, requests
- Mega Spec - https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.docs&tags=stx.2.0&project_group_id=86
- Containers project: One node configuration ( AIO-SX ) - https://bugs.launchpad.net/starlingx/+bug/1836575 - https://review.opendev.org/#/c/672171/
- Containers project: All in One Duplex configuration ( AIO-DX ) - https://bugs.launchpad.net/starlingx/+bug/1836574 - https://review.opendev.org/#/c/672192/
- Containers project: Standard, non storage, configuration ( Standard 2+2 ) - https://review.opendev.org/#/c/661659/
- Containers project: Standard, storage, configuration ( Standard 2+2+2 ) - https://review.opendev.org/#/c/663450/
- Containers project: Containerized Openstack FAQ - https://review.opendev.org/#/c/660154/
- Bare Metal (Ironic) Deployment Option - https://review.opendev.org/#/c/672194/
- Containers project: Building StarlingX Docker Images - To be started, being translated from the wiki
- All-in-one Duplex with up to 4 Computes deployment guide - To be started
- Bugs - https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs
- Completed: Error links for starling project docs - https://bugs.launchpad.net/starlingx/+bug/1835207 - A total of 15 reviews are part of this Launchpad bug: 9 reviews have been merged, 5 reviews are in Needs Workflow Label
- On Hold: IPv6 setup info missing from Install doc - https://bugs.launchpad.net/starlingx/+bug/1836052
- In Progress: meeting time on Wiki in UTC are often incorrect - https://bugs.launchpad.net/starlingx/+bug/1837418
- defect trend / gating launchpads - see Release Team proposal for handling critical/high/medium/low bugs before RC1, after RC1 and after release date
- Critical: - must fix by release date (Release Build: Aug 23)
- High: - must be fixed for the stx.2.0 release, but could be fixed *after* the release date (in a maintenance release) - fixes will be backported to stx.2.0
- Medium: - continue working until release date - fix as many as possible - defer to 3.0 after release date - fixes will be backported between RC1 and the release date - fixes will not be backported to stx.2.0 after the release date
- Low: - optional, will be deferred to stx.3.0 at RC1, fixes will not be backported to stx.2.0
- there was general agreement on this proposal with some comments
- Cindy asked about bugs in Incomplete state - Ghada said there's probably not a single 'policy' that can be applied to all Incompletes, the TL/PL have to take them on a case by case basis - Bill agreed but said that for the ones where the Reporter just isn't updating, send a note to Bill & he'll try to help get things moving again
- Dean asked if the distinction between High & Medium is the backport policy - Ghada confirmed that the Highs would be backported to the release branch after the release date, Mediums will not
- Saul asked to highlight this to the mailing list - Bill to respond to the existing thread on the mailing list (as well as including in the minutes of this meeting & tomorrow's release team meeting)
- bitergia update: see https://etherpad.openstack.org/p/stx-bitergia
- first contact update - mailing list responsiveness - see https://etherpad.openstack.org/p/stx-first-contact (at the bottom) - next first contact meeting is on July 25 at 9:30 eastern (https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190725T1330)
- open actions from previous meetings - updates pending:
- ACTION: Cindy to send her perspective on "severity" to the mailing list, let the discussion ensue - done - add mailing list ref here
- ACTION: Yong to propose how we could formalize the process of assessing the impact of a bug from different perspectives - Yong to send his thoughts to the mailing list or here
- ACTION: doc team to do an audit of the wikis to find pages that have stale data and/or aren't properly pointing to the docs site - in progress
- ACTION: release team make the recommendation re: Blueprints for Backlog in the next TSC meeting - pending - on TSC etherpad for tomorrow's meeting
- ACTION: Bill start checking if any 'new' people emails are going unresponded - see update here (at bottom): https://etherpad.openstack.org/p/stx-first-contact - in progress - to discuss in first contact SIG
- ACTION: Dean find out what our options are for increasing the per-mail size limit - done, Dean has doubled the size to 120K; see Dean's email: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-July/005366.html
- ACTION: Bill follow up on status of bitergia changes - see Thierry's updates here: https://etherpad.openstack.org/p/stx-bitergia - in progress
- for later:
- ACTION: Frank update on the forecast for the Docker image list - see https://bugs.launchpad.net/starlingx/+bug/1834504, with build team now - no longer with Frank
- ACTION: Frank to talk to CENGN about getting sufficient space (pending any other parameters from Scott) - CENGN to get back to Frank on this - re: feasibility & cost
- ACTION: Scott & Dean to talk about the mechanics for big files - pending Frank's discussions with CENGN
- ACTION: Numan & Ada to sort out how aggregate regression reporting will be done (manual & automated) - they have booked a meeting to discuss - this is starting tomorrow (July 25) - automated & manual results will be included in the one report
- ACTION: Numan/Yang arrange an automation framework info session for the Community (in a few weeks after Yang's vacation) - tbd
- ACTION: Bill check with Ian about the logistics/timing of a mid-cycle meeting - tbd, Bill to chase this down

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, July 23, 2019 8:58 AM
To: 'starlingx-discuss at lists.starlingx.io'
Subject: Community Call (July 24, 2019)

Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include...
- sanity - any red sanities since last Community meeting?
- reviews in need of attention
- defect trend / gating launchpads
- bitergia update: see https://etherpad.openstack.org/p/stx-bitergia
- first contact update - mailing list responsiveness - see https://etherpad.openstack.org/p/stx-first-contact (at the bottom)
- open actions from previous meetings

Please feel free to add topics on the etherpad [0].

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190724T1400

From Bill.Zvonar at windriver.com Wed Jul 24 20:17:34 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 24 Jul 2019 20:17:34 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35FE5587@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FE52F4@SHSMSX104.ccr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DDE5@FMSMSX125.amr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FE5587@SHSMSX104.ccr.corp.intel.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AB229A@ALA-MBD.corp.ad.wrs.com>

As discussed in the Community call today, Ghada & I are proposing the following handling of bugs of the different priorities ("Importance" in Launchpad). This will be discussed again tomorrow during the Release Team meeting as well...

- Critical: - must fix by release date (Release Build: Aug 23)
- High: - must be fixed for the stx.2.0 release, but could be fixed *after* the release date (in a maintenance release) - fixes will be backported to stx.2.0
- Medium: - continue working until release date - fix as many as possible - defer to 3.0 after release date - fixes will be backported between RC1 and the release date - fixes will not be backported to stx.2.0 after the release date
- Low: - optional, will be deferred to stx.3.0 at RC1, fixes will not be backported to stx.2.0

Bill...
-----Original Message-----
From: Xie, Cindy
Sent: Friday, July 19, 2019 7:56 PM
To: Perez Carranza, Jose ; Saul Wold ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] bug severity and priority

Another idea is to use the mailing list: each day, the triage lead sends out a list of "new" bugs that need triage, and the sub-project leads respond on the mailing list. That way we keep the information public, and we can assign bugs to appropriate owners (or people can volunteer).

Thx. - cindy

-----Original Message-----
From: Perez Carranza, Jose
Sent: Friday, July 19, 2019 11:02 PM
To: Xie, Cindy ; Saul Wold ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] bug severity and priority

> -----Original Message-----
> From: Xie, Cindy
> Sent: Friday, July 19, 2019 9:35 AM
> To: Perez Carranza, Jose ; Saul Wold ; starlingx-discuss at lists.starlingx.io
> Subject: RE: [Starlingx-discuss] bug severity and priority
>
> Jose,
> Just to clarify: for the weekly bug triage meeting, you only ask to triage the new bugs, right?

Yes, only the new ones should be triaged.

> My concern is about the triage frequency: right now, the new bugs are triaged almost on a daily basis, mostly by Ghada, consulting the technical experts.
> If we switch to a triage meeting, I am not sure the new LPs can be handled in a timely manner.
>
> But I agree that having a triage meeting is a good idea.
> Thx. - cindy

To mitigate this concern, as Saul pointed out, we should ensure there is a "triage" section in each subproject meeting, while making sure all the stakeholders for the specific bugs are online to provide feedback.

> -----Original Message-----
> From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
> Sent: Friday, July 19, 2019 8:16 PM
> To: Saul Wold ; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] bug severity and priority
>
> > -----Original Message-----
> > From: Saul Wold [mailto:sgw at linux.intel.com]
> > Sent: Thursday, July 18, 2019 10:57 PM
> > To: starlingx-discuss at lists.starlingx.io
> > Subject: Re: [Starlingx-discuss] bug severity and priority
> >
> > Folks,
> >
> > As I mentioned in a prior email about a previous project (Yocto Project), we were also time-based (every 6 months). We defined Importance [0] of the bug based on Severity (chosen by the submitter) and Priority (assigned during a triage process). We had 5 Priority levels in Bugzilla: High, Medium+, Medium, Low and Undecided; these would map to our Critical, High, Medium, Low and Undecided.
>
> Those triage meetings were very helpful because they were live discussions about the bugs with all the stakeholders. I think we should consider having a weekly meeting just to triage bugs.
>
> Regards,
> José
>
> > This clearly frames it based on Milestones and releases, due to the time-based nature of the Yocto Project. Notice that the High/Critical is the only one that is truly "gating" or a milestone/release blocker; the Medium+, our High, won't block a milestone but should be fixed for a release, though it could be in a dot.dot soon after the release.
> >
> > > Importance
> > > The Importance of the bug is defined by its Priority and Severity. The Priority classifies the bug's fixing order. In other words, how soon will it get fixed relative to other bugs? Priorities are set during the bug Triage meeting and cannot be changed by the user. The priority appears to the left of the Severity field.
> > > Here are the values that Priority can be set to during the Triage meeting:
> > >
> > > High -- Bug fixing is planned immediately for the target milestone. Milestone cannot be released if there is a High bug opened against the milestone. High priority issues cause major functional loss of a specific feature that is POR for the up-coming milestone. These issues are easily hit by the user and greatly impact the user experience or customer requirements. Finally, these issues could be urgent security fixes that need to be corrected in a prior release. The bug assignee is not to change the target milestones for High bugs without prior approval of the Triage team.
> > > Medium+ -- Bug fixing is planned before the milestone and must be fixed, or have a solution planned, before the release is finalized. These issues are not show-stoppers but have somewhat significant impact to system functions and user experience.
> > > Medium -- These are important issues we keep track of and try to plan fixing for the release. They have limited impact for the system functions and releases.
> > > Low -- Bug fixing is only done opportunistically. Generally not planned for the up-coming project release. Issues that are not a POR feature request, or are hard to reproduce, fall into this category.
> > > Undecided -- These issues are newly reported and are undecided before Triage, or issues that are a feature request which isn't approved for a future release yet. The issue will be changed to have an actual Priority after the Triage team approves it.
> > > Note: High impact but Low Priority bugs can be documented in the release notes.
> > >
> > > The Severity indicates how much the issue impacted the person reporting the bug. Severity can be categorized into five areas.
> > >
> > > Critical -- Crashes, hang, loss of data, negative impact to other components, memory leak etc.
> > > Major -- Major loss of functionality of POR.
> > > Normal -- Regular issue, some loss of functionality under certain circumstances. This is the default Severity.
> > > Minor -- Minor loss of functionality, or issues with an easy workaround available.
> > > Enhancement -- Request for enhancement or new feature to be worked.
> >
> > I hope this helps by providing a different viewpoint from another project.
> >
> > Sau!
> >
> > [0] https://wiki.yoctoproject.org/wiki/Bugzilla_Configuration_and_Bug_Tracking#Importance
> >
> > On 7/17/19 3:41 AM, Zvonar, Bill wrote:
> > > Hi Cindy,
> > >
> > > Thought about this some more, sorry it took me so long to respond further.
> > >
> > > I agree with splitting out the definitions of release priority/importance (which is subjective) from the technical severity (which is, I'd say, much less subjective).
> > >
> > > Do we agree that one of the key next steps is to define the severity levels for defects in different domains?
> > >
> > > Once we have those agreed and written down somewhere, they can be used as guidance for people that are opening Launchpads, and for those that screen them. Someone will note that some bugs cross domains, so it's not as simple as looking at one set of severity definitions, but let's cross that bridge next.
> > >
> > > Then, if we've got general alignment on the severity definitions per domain, we can sort out what to use as a QRC formula for a release, I think.
> > > Btw, it'd be nice if Launchpad had a field for Severity, so we could track that more easily - does anybody know if we can just request this & get it added as a custom field?
> > >
> > > Bill...
> > >
> > > -----Original Message-----
> > > From: Xie, Cindy
> > > Sent: Wednesday, July 10, 2019 7:13 PM
> > > To: Zvonar, Bill ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
> > > Subject: RE: bug severity and priority
> > >
> > > Bill,
> > > I definitely agree that not all Medium shall be pushed to stx.3.0; this needs to be assessed carefully. But if we combine the severity and priority together, then this decision needs to take the resource factor into consideration as well.
> > >
> > > Actually, I think it's confusing to call an individual LP "gating" - I understand that we want to get the product quality to a good shape and want to get as many bugs fixed as possible before we ship it. I suggest using the defect count as part of the release criteria (QRC). An example could be:
> > >
> > > Number of Critical P1 defects: Zero
> > > Number of High P2 defects: < x
> > > Number of Medium P3 defects: < y
> > >
> > > And the only thing we need to agree on is the "x" and "y". It makes it easier for the TSC or release team to make the decision. The QRC needs to be agreed earlier, instead of right before the release decision shall be made. This way, we can really direct our engineering resources to work on the most important items, and we all have an agreed common goal.
> > >
> > > Thanks. - cindy
> > >
> > > -----Original Message-----
> > > From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
> > > Sent: Thursday, July 11, 2019 1:39 AM
> > > To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
> > > Subject: RE: bug severity and priority
> > >
> > > Hi Cindy,
> > >
> > > Thanks for sending this, I think this gives us something to start the discussion.
> > >
> > > However we decide to align on severity/priority (I'll comment on that more later, need to think about it more), I think we need to be careful before we move all mediums to 3.0; it may be too much of a Gordian knot solution.
> > >
> > > I think we need to assess the mediums (as Yong suggested earlier) to say why they should or should not be in 2.0. I also think this may help us sort out what our gating criteria are.
> > >
> > > Bill...
> > >
> > > -----Original Message-----
> > > From: Xie, Cindy
> > > Sent: Wednesday, July 10, 2019 10:42 AM
> > > To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill ; Khalil, Ghada
> > > Subject: bug severity and priority
> > >
> > > Bill/Ghada,
> > > I am sending out my definition of bug severity and priority:
> > >
> > > Bug Exposure or Severity -- Definition
> > > 1 - Critical: Product or key feature is not usable for intended purpose.
> > > 2 - High: Product or key feature is not reliably usable for intended purpose, or use is significantly impaired.
> > > 3 - Medium: Product or key feature is usable with a workaround.
> > > 4 - Low: Tolerable impact to user experience with minimal service and support costs.
> > >
> > > Bug Priority -- Definition
> > > P1 - Stopper: Resolution of this defect takes precedence over other defects and most other development activities. This level is used to focus maximum development team resources to resolve a defect in the shortest possible timeframe.
> > > P2 - High: Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. The urgency to fix a P2 priority defect is imminent. P2 priority defects are intended to be resolved by the next planned external release of the software.
> > > P3 - Medium: Resolution of the defect has precedence over resolving other defects with lesser classifications of priority. P3 priority defects must have a planned timeframe for a verified resolution.
> > > P4 - Low: Resolution of the defect has least urgency to resolve; P4 priority defects may or may not have plans to resolve.
> > >
> > > Let's discuss this and agree how we'd like to use them. My suggestion for the current "Medium" bugs is that we mark them as "stx.3.0"; then, at the beginning of stx.3, their Priority can move to "high", given the intent to get them fixed in 3.0.
> > >
> > > But the bug severity should never change, because severities are standard.
> > >
> > > Thx. - cindy

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From michael.l.tullis at intel.com Wed Jul 24 21:16:57 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Wed, 24 Jul 2019 21:16:57 +0000
Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 7/24/2019
Message-ID: <3808363B39586544A6839C76CF81445EA1B9B450@ORSMSX104.amr.corp.intel.com>

For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation

Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings.

-- Mike
From maria.g.perez.ibarra at intel.com Wed Jul 24 22:33:31 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Wed, 24 Jul 2019 22:33:31 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190724
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-24 (link)
Status: GREEN

===========================================
Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

===========================================

Regards
Maria G.

From cindy.xie at intel.com Thu Jul 25 03:56:21 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Thu, 25 Jul 2019 03:56:21 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190724
In-Reply-To:
References:
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE9465@SHSMSX104.ccr.corp.intel.com>

Nice to see our sanity now back to green! :)
From cindy.xie at intel.com Thu Jul 25 03:58:18 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Thu, 25 Jul 2019 03:58:18 +0000
Subject: [Starlingx-discuss] bug severity and priority
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AB229A@ALA-MBD.corp.ad.wrs.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35FDBCAE@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007A87057@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35FDC23D@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AAF88B@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DD6E@FMSMSX125.amr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FE52F4@SHSMSX104.ccr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2B35DDE5@FMSMSX125.amr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35FE5587@SHSMSX104.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AB229A@ALA-MBD.corp.ad.wrs.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35FE947D@SHSMSX104.ccr.corp.intel.com>

Thanks to Bill and Ghada for driving the discussion thread to closure. I think this is a great balance between the "time-based release" strategy we agreed on earlier and keeping quality as the top priority.

Thanks! - cindy
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Thu Jul 25 15:19:01 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 25 Jul 2019 10:19:01 -0500
Subject: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
In-Reply-To: <93814834B4855241994F290E959305C7530AF3B3@SHSMSX104.ccr.corp.intel.com>
References: <6703202FD9FDFF4A8DA9ACF104AE129FC152D0F3@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF17A@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC152D3B5@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF38E@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC152D75E@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C7530AF3B3@SHSMSX104.ccr.corp.intel.com>
Message-ID:

Hi Shuquan,

Months ago we discussed having the capability in StarlingX to take a personal/upstream change to Nova and create a new image without the need to rebuild all the images/projects.
Don pointed out this new tool, which enables that functionality: https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Incremental_Image_Updates

Please let us know if this fulfills your needs and, if not, what else can be done on the build side.

Thanks

Victor R

On Tue, Jul 23, 2019 at 1:16 AM Liu, ZhipengS wrote:
>
> Hi Don,
> Thanks for your help! Now it works as below! Not sure why it did not work yesterday.
>
> ~/starlingx/cgcs-root/build-tools/build-docker-images$ bash update-stx-image.sh --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 --module-src ~/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova --user zhipengl
> .........
> Successfully built nova
> Installing collected packages: nova
>   Found existing installation: nova 19.0.1.dev116
>     Uninstalling nova-19.0.1.dev116:
>       Created temporary directory: /tmp/pip-uninstall-KrVTRI
>       Removing file or directory /var/lib/openstack/bin/nova-api
>       Removing file or directory /var/lib/openstack/bin/nova-api-metadata
>       Removing file or directory /var/lib/openstack/bin/nova-api-os-compute
>       Removing file or directory /var/lib/openstack/bin/nova-api-wsgi
>       Removing file or directory /var/lib/openstack/bin/nova-cells
>       Removing file or directory /var/lib/openstack/bin/nova-compute
>       Removing file or directory /var/lib/openstack/bin/nova-conductor
>       Removing file or directory /var/lib/openstack/bin/nova-console
>       Removing file or directory /var/lib/openstack/bin/nova-consoleauth
>       Removing file or directory /var/lib/openstack/bin/nova-dhcpbridge
>       Removing file or directory /var/lib/openstack/bin/nova-manage
>       Removing file or directory /var/lib/openstack/bin/nova-metadata-wsgi
>       Removing file or directory /var/lib/openstack/bin/nova-network
>       Removing file or directory /var/lib/openstack/bin/nova-novncproxy
>       Removing file or directory /var/lib/openstack/bin/nova-placement-api
>       Removing file or directory /var/lib/openstack/bin/nova-policy
>       Removing file or directory /var/lib/openstack/bin/nova-rootwrap
>       Removing file or directory /var/lib/openstack/bin/nova-rootwrap-daemon
>       Removing file or directory /var/lib/openstack/bin/nova-scheduler
>       Removing file or directory /var/lib/openstack/bin/nova-serialproxy
>       Removing file or directory /var/lib/openstack/bin/nova-spicehtml5proxy
>       Removing file or directory /var/lib/openstack/bin/nova-status
>       Removing file or directory /var/lib/openstack/bin/nova-xvpvncproxy
>       Created temporary directory: /var/lib/openstack/etc/~ova
>       Removing file or directory /var/lib/openstack/etc/nova/
>       Created temporary directory: /var/lib/openstack/lib/python2.7/site-packages/~ova-19.0.1.dev116.dist-info
>       Removing file or directory /var/lib/openstack/lib/python2.7/site-packages/nova-19.0.1.dev116.dist-info/
>       Created temporary directory: /var/lib/openstack/lib/python2.7/site-packages/~ova
>       Removing file or directory /var/lib/openstack/lib/python2.7/site-packages/nova/
>       Successfully uninstalled nova-19.0.1.dev116
> Successfully installed nova-19.0.1.dev117
> Cleaning up...
> Removed build tracker '/tmp/pip-req-tracker-OryF4V'
> + '[' 0 -ne 0 ']'
> + run_customization_script
> + '[' -x /image-update/customize.sh ']'
> + exit 0
> sha256:89c9ae9e438412f08f5ae0b85e562e657cf78542e763c595c52083bc574f9b31
> Updated image: zhipengl/stx-nova:master-centos-stable-20190715T233000Z.1
>
> Zhipeng
>
> -----Original Message-----
> From: Penney, Don [mailto:Don.Penney at windriver.com]
> Sent: July 23, 2019 12:06
> To: Liu, ZhipengS ; Dean Troyer ; starlingx-discuss at lists.starlingx.io
> Cc: zhu.boxiang at 99cloud.net
> Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
>
> The extra text in the output of your --version vs mine is concerning, and I'm thinking that's getting picked up by the script when the hardcode_python_module_version function runs, causing some corruption. It appears to be INFO logs, which don't get printed to stdout by default on our systems. The function is expecting that the only thing on stdout when running the --version command is the version itself.
>
> Is there some python config file on your host that's setting the default logging level to logging.INFO or logging.DEBUG maybe, instead of logging.WARN?
>
> -----Original Message-----
> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
> Sent: Monday, July 22, 2019 11:22 PM
> To: Penney, Don; Dean Troyer; starlingx-discuss at lists.starlingx.io
> Cc: zhu.boxiang at 99cloud.net
> Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
>
> Hi Don,
>
> ~/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova$ python ./setup.py --version
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'long_description_content_type'
>   warnings.warn(msg)
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'project_urls'
>   warnings.warn(msg)
> Installed /home/wrsroot/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova/.eggs/pbr-5.4.1-py2.7.egg
> [pbr] Generating ChangeLog
> 19.0.1.dev117
>
> Zhipeng
>
> -----Original Message-----
> From: Penney, Don [mailto:Don.Penney at windriver.com]
> Sent: July 22, 2019 21:59
> To: Liu, ZhipengS ; Dean Troyer ; starlingx-discuss at lists.starlingx.io
> Cc: zhu.boxiang at 99cloud.net
> Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
>
> Hi Zhipeng,
>
> Can you verify that you can run the following in your cloned repo?
>
> [/localdisk/loadbuild/dpenney/nova-update/stx-nova]$ python ./setup.py --version
> 19.0.1.dev116
>
> -----Original Message-----
> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com]
> Sent: Monday, July 22, 2019 3:15 AM
> To: Penney, Don; Dean Troyer; starlingx-discuss at lists.starlingx.io
> Cc: zhu.boxiang at 99cloud.net
> Subject: RE: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
>
> Hi Don,
>
> I failed to update the image. Could you give me some help, thanks!
>
> ~/starlingx/cgcs-root/build-tools/build-docker-images$ bash update-stx-image.sh \
>     --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \
>     --module-src ~/starlingx/workspace/localdisk/loadbuild/zhipengl/nova-update/stx-nova \
>     --user zhipengl
> ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images
> ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/wheels ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images
> Running: wget http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-stable-wheels.tar
> --2019-07-22 02:57:20-- http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels//stx-centos-stable-wheels.tar
> Resolving mirror.starlingx.cengn.ca (mirror.starlingx.cengn.ca)... 135.84.104.40
> Connecting to mirror.starlingx.cengn.ca (mirror.starlingx.cengn.ca)|135.84.104.40|:80... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 63354880 (60M) [application/octet-stream]
> Saving to: 'stx-centos-stable-wheels.tar'
>
> stx-centos-stable-wheels.tar 100%[===========================================================================>] 60.42M 7.41MB/s in 14s
>
> 2019-07-22 02:57:34 (4.42 MB/s) - 'stx-centos-stable-wheels.tar' saved [63354880/63354880]
>
> ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/modules/stx-nova ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/wheels ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'long_description_content_type'
>   warnings.warn(msg)
> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'project_urls'
>   warnings.warn(msg)
> sed: -e expression #1, char 20: unterminated `s' command
> ~/starlingx/workspace/localdisk/loadbuild/zhipengl/starlingx/std/update-images/unnamed-update/stx-nova_master-centos-stable-20190715T233000Z.1/pip-packages/wheels ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images
> ~/starlingx/cgcs-root/build-tools/build-docker-images ~/starlingx/cgcs-root/build-tools/build-docker-images
> Running: docker image pull starlingx/stx-nova:master-centos-stable-20190715T233000Z.0
> master-centos-stable-20190715T233000Z.0: Pulling from starlingx/stx-nova
> 5ad559c5ae16: Already exists
> 3d7ee0b84e39: Pull complete
> e1047bfd73cf: Pull complete
> 16f151ad544f: Pull complete
> 844126c15d7e: Pull complete
> 869460797821: Pull complete
> 0c319da8f09d: Pull complete
> 2e884702ad07: Pull complete
> Digest: sha256:4032358f3ab208e76c2737854fdcf4f7bd582ef8f91e150193ded8feb662fdbe
> Status: Downloaded newer image for starlingx/stx-nova:master-centos-stable-20190715T233000Z.0
> + bash -x /image-update/internal-update-stx-image.sh
> + UPDATES_DIR=/image-update
> + PIP_PACKAGES_DIR=/image-update/pip-packages
> + DIST_PACKAGES_DIR=/image-update/dist-packages
> + CUSTOMIZATION_SCRIPT=/image-update/customize.sh
> ++ source /etc/os-release
> +++ NAME='CentOS Linux'
> +++ VERSION='7 (Core)'
> +++ ID=centos
> +++ ID_LIKE='rhel fedora'
> +++ VERSION_ID=7
> +++ PRETTY_NAME='CentOS Linux 7 (Core)'
> +++ ANSI_COLOR='0;31'
> +++ CPE_NAME=cpe:/o:centos:centos:7
> +++ HOME_URL=https://www.centos.org/
> +++ BUG_REPORT_URL=https://bugs.centos.org/
> +++ CENTOS_MANTISBT_PROJECT=CentOS-7
> +++ CENTOS_MANTISBT_PROJECT_VERSION=7
> +++ REDHAT_SUPPORT_PRODUCT=centos
> +++ REDHAT_SUPPORT_PRODUCT_VERSION=7
> ++ echo CentOS Linux
> + OS_NAME='CentOS Linux'
> ++ getopt -o h -l help: --
> + OPTS=' --'
> + '[' 0 -ne 0 ']'
> + eval set -- ' --'
> ++ set -- --
> + true
> + case $1 in
> + shift
> + break
> + install_dist_packages
> + local -i file_count=0
> ++ find /image-update/dist-packages -type f
> ++ wc -l
> + file_count=0
> + '[' 0 -eq 0 ']'
> + return 0
> + install_pip_packages
> + local modules
> + local wheels
> ++ find /image-update/pip-packages/modules/stx-nova -maxdepth 0 -type d
> + modules=/image-update/pip-packages/modules/stx-nova
> ++ find /image-update/pip-packages/wheels/ -type f -name '*.whl'
> + wheels=
> + '[' -z /image-update/pip-packages/modules/stx-nova -a -z '' ']'
> + pip install -vvv --no-deps --no-index --pre --no-cache-dir --only-binary :all: --no-compile --force-reinstall /image-update/pip-packages/modules/stx-nova
> DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
> Ignoring indexes: https://pypi.org/simple
> Created temporary directory: /tmp/pip-ephem-wheel-cache-Pld6sg
> Created temporary directory: /tmp/pip-req-tracker-5OQGeX
> Created requirements tracker '/tmp/pip-req-tracker-5OQGeX'
> Created temporary directory: /tmp/pip-install-8tWna2
> Processing /image-update/pip-packages/modules/stx-nova
>   Created temporary directory: /tmp/pip-req-build-EngJ2Q
>   Added file:///image-update/pip-packages/modules/stx-nova to build tracker '/tmp/pip-req-tracker-5OQGeX'
>   Running setup.py (path:/tmp/pip-req-build-EngJ2Q/setup.py) egg_info for package from file:///image-update/pip-packages/modules/stx-nova
>     Running command python setup.py egg_info
>     ERROR:root:Error parsing
>     Traceback (most recent call last):
>       File "/var/lib/openstack/lib/python2.7/site-packages/pbr/core.py", line 96, in pbr
>         attrs = util.cfg_to_args(path, dist.script_args)
>       File "/var/lib/openstack/lib/python2.7/site-packages/pbr/util.py", line 256, in cfg_to_args
>         pbr.hooks.setup_hook(config)
>       File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/__init__.py", line 25, in setup_hook
>         metadata_config.run()
>       File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/base.py", line 27, in run
>         self.hook()
>       File "/var/lib/openstack/lib/python2.7/site-packages/pbr/hooks/metadata.py", line 26, in hook
>         self.config['name'], self.config.get('version', None))
>       File "/var/lib/openstack/lib/python2.7/site-packages/pbr/packaging.py", line 849, in get_version
>         name=package_name))
>     Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name nova was given, but was not able to be found.
>     error in setup command: Error parsing /tmp/pip-req-build-EngJ2Q/setup.cfg: Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name nova was given, but was not able to be found.
> Cleaning up...
>   Removing source in /tmp/pip-req-build-EngJ2Q
> Removed file:///image-update/pip-packages/modules/stx-nova from build tracker '/tmp/pip-req-tracker-5OQGeX'
> Removed build tracker '/tmp/pip-req-tracker-5OQGeX'
> ERROR: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-EngJ2Q/
> Exception information:
> Traceback (most recent call last):
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 178, in main
>     status = self.run(options, args)
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/commands/install.py", line 352, in run
>     resolver.resolve(requirement_set)
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 131, in resolve
>     self._resolve_one(requirement_set, req)
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 294, in _resolve_one
>     abstract_dist = self._get_abstract_dist_for(req_to_install)
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/resolve.py", line 242, in _get_abstract_dist_for
>     self.require_hashes
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 362, in prepare_linked_requirement
>     abstract_dist.prep_for_dist(finder, self.build_isolation)
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/operations/prepare.py", line 171, in prep_for_dist
>     self.req.prepare_metadata()
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/req/req_install.py", line 537, in prepare_metadata
>     self.run_egg_info()
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/req/req_install.py", line 615, in run_egg_info
>     command_desc='python setup.py egg_info')
>   File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/utils/misc.py", line 776, in call_subprocess
>     % (command_desc, proc.returncode, cwd))
> InstallationError: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-EngJ2Q/
> + '[' 1 -ne 0 ']'
> + echo 'Failed pip install'
> Failed pip install
> + exit 1
> Failed to update image: starlingx/stx-nova:master-centos-stable-20190715T233000Z.0
>
> Zhipeng
>
> -----Original Message-----
> From: Penney, Don [mailto:Don.Penney at windriver.com]
> Sent: July 20, 2019 2:04
> To: Dean Troyer ; starlingx-discuss at lists.starlingx.io
> Cc: zhu.boxiang at 99cloud.net
> Subject: Re: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
>
> For updating an image for testing, take a look at:
> https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Incremental_Image_Updates
>
> As Dean notes, clone the repo and cherry-pick the commit, and then do something like:
>
> time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \
>     --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \
>     --module-src ${path_to_cloned_repo}/stx-nova
>
> ie:
>
> # clone stx-nova, get stx/stein.2
> cd /localdisk/loadbuild/dpenney/
> mkdir nova-update
> cd nova-update/
> git clone https://github.com/starlingx-staging/stx-nova.git
> cd stx-nova/
> git fetch https://github.com/starlingx-staging/stx-nova.git stx/stein.2
> git checkout FETCH_HEAD
>
> # cherry-pick update
> git fetch https://review.opendev.org/openstack/nova refs/changes/69/651969/13 && git cherry-pick FETCH_HEAD
>
> # Fix up conflicts, etc
>
> # Build updated image, from 20190715T233000Z build as base,
> # specifying cloned/modified repo
> time bash -x ${MY_REPO}/build-tools/build-docker-images/update-stx-image.sh \
>     --from starlingx/stx-nova:master-centos-stable-20190715T233000Z.0 \
>     --module-src /localdisk/loadbuild/dpenney/nova-update/stx-nova \
>     --user dpenney
>
> This produces an updated image in the local registry:
> Updated image: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
>
> If you also set a registry with --registry and use the --push option, the command will push the updated image to that registry.
>
> This also produces an image record file:
> $ cat ${MY_WORKSPACE}/std/update-images/unnamed-update/image-updates.lst
> dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
>
> Which you can pass as an argument to build-helm-charts.sh when building your application tarball for testing:
>
> build-helm-charts.sh \
>     --image-record http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/docker-images/images-centos-stable-versioned.lst \
>     --image-record ${MY_WORKSPACE}/std/update-images/unnamed-update/image-updates.lst \
>     --label centos-stable-versioned
>
> If you look at the yaml file in the tarball, you can see that it now references the updated image:
>
> $ tar xzf ${MY_WORKSPACE}/std/build-helm/stx/stx-openstack-1.0-17-centos-stable-versioned.tgz -O ./stx-openstack.yaml | grep stx-nova:
> nova_api: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_cell_setup: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_compute: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_compute_ironic: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_compute_ssh: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_conductor: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_consoleauth: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_db_sync: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_novncproxy: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_placement: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_scheduler: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_spiceproxy: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
> nova_spiceproxy_assets: dpenney/stx-nova:master-centos-stable-20190715T233000Z.1
>
> -----Original Message-----
> From: Dean Troyer [mailto:dtroyer at gmail.com]
> Sent: Friday, July 19, 2019 1:08 PM
> To: starlingx-discuss at lists.starlingx.io
> Cc: zhu.boxiang at 99cloud.net
> Subject: Re: [Starlingx-discuss] about cherry-pick a patch from OpenStack upstream for testing issue in StarlingX
>
> On Fri, Jul 19, 2019 at 10:57 AM Yong Hu wrote:
> > For LP[0], there is a patch (https://review.opendev.org/#/c/651969/) in Nova upstream, what's the method/process for StarlingX to cherry-pick it for testing a LP reported in StarlingX?
> >
> > [0]: https://bugs.launchpad.net/starlingx/+bug/1820882
>
> At a high level you would clone the stx-nova repo stx/stein.2 branch [0] and cherry-pick/backport the commit you want to test, and rebuild the Nova docker image and test that.
>
> I do have some Zuul jobs in starlingx/tis-repo that will pull that Nova branch and run the unit, functional and pep8 tox jobs to test in OpenStack CI.
> Given that the stx/stein branches also carry the NUMA live migration patches, you would probably want to do whatever other live migration testing would be done upstream to validate this in the StarlingX context.
>
> dt
>
> [0] https://github.com/starlingx-staging/stx-nova/tree/stx/stein.2
>
> --
> Dean Troyer
> dtroyer at gmail.com

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Jerry.Sun at windriver.com Thu Jul 25 18:55:49 2019
From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry))
Date: Thu, 25 Jul 2019 18:55:49 +0000
Subject: [Starlingx-discuss] Code is not being merged into opendev
Message-ID:

Hi All,

I just noticed that none of the code being merged today is showing up in opendev, and as a result I am not able to pull any changes down.

For example, https://review.opendev.org/#/c/671561/ is merged but does not show up in https://opendev.org/starlingx/ansible-playbooks/commits/branch/master

We have checked stx-config as well, and it seems like things were working yesterday and got broken today. This does not seem to be a problem with a specific repo and seems to affect all repos. Does anyone know what is happening?

Thanks,
Jerry

From fungi at yuggoth.org Thu Jul 25 19:26:16 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 25 Jul 2019 19:26:16 +0000
Subject: [Starlingx-discuss] Code is not being merged into opendev
In-Reply-To:
References:
Message-ID: <20190725192616.iwpy3hhdvyypl5tm@yuggoth.org>

On 2019-07-25 18:55:49 +0000 (+0000), Sun, Yicheng (Jerry) wrote:
> I just noticed that none of the code being merged today is showing up in opendev, and as a result I am not able to pull any changes down.
>
> For example, https://review.opendev.org/#/c/671561/ is merged but does not show up in https://opendev.org/starlingx/ansible-playbooks/commits/branch/master
>
> We have checked stx-config as well and it seems like things were working yesterday, and got broken today. This does not seem to be a problem with a specific repo and seems to affect all repos. Does anyone know what is happening?

There was a storage-related problem in the provider hosting OpenDev's Git servers early UTC today. We noticed that some Git refs which were updated in Gerrit during that time did not reliably replicate, so at 13:40 UTC I initiated a full re-replication of all OpenDev's Git repositories to make sure they're all intact. This has a side effect of delaying further refs from replicating until the backlog clears. At this moment we are 72% complete, which means you should expect any refs created in Gerrit after 13:40 UTC today to appear there by roughly 21:40 UTC.
--
Jeremy Stanley
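One quick way to confirm whether a merged change has finished replicating to the opendev.org mirrors is to look at the branch tip the mirror currently serves. A minimal Python sketch - the repository URL is taken from Jerry's example above; the function name and everything else here are illustrative, not an OpenDev tool:

    # Print the master branch tip that opendev.org currently serves for a
    # repo, using the standard `git ls-remote` plumbing command. If this
    # SHA predates your merged change, replication hasn't caught up yet.
    import subprocess

    def remote_master_sha(repo_url):
        out = subprocess.check_output(
            ["git", "ls-remote", repo_url, "refs/heads/master"])
        return out.split()[0].decode()

    print(remote_master_sha("https://opendev.org/starlingx/ansible-playbooks.git"))

Comparing this output against the merge commit shown in Gerrit tells you whether you are still waiting on the replication backlog described above.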
From maria.g.perez.ibarra at intel.com Thu Jul 25 22:17:51 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Thu, 25 Jul 2019 22:17:51 +0000
Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/25/19
Message-ID:

StarlingX 2.0 Release Status: ISO: BUILD_ID="20190718T013000Z" from (link)

----------------------------------------------------------------------
MANUAL EXECUTION
----------------------------------------------------------------------
Overall Results:
Total = 493
Pass = 323
Fail = 15
Blocked = 37
Not Run = 97
Obsolete = 37
Total executed = 375
Pass Rate = 95.56%
Formula used: Pass Rate = pass * 100 / (pass + fail) (a quick worked check follows this report)

Results per Domain:
Regression - AIO-SX: 25 PASS | 1 OBSOLETE
Regression - Backup & Restore:
Regression - Distributed Cloud:
Regression - Gnocchi: 15 PASS
Regression - FM: 3 PASS
Regression - HA: 11 PASS | 1 FAIL
Regression - Heat: 12 PASS | 1 OBSOLETE
Regression - Horizon: 4 PASS
Regression - Install and Config: 5 PASS
Regression - Maintenance: 8 PASS | 1 FAIL
Regression - Networking: 106 PASS | 3 FAIL | 19 BLOCKED | 15 OBSOLETE
Regression - Nova: 12 PASS | 3 FAIL
Regression - Security: 34 PASS | 1 FAIL | 6 BLOCKED | 1 OBSOLETE
Regression - Storage: 15 PASS | 1 FAIL | 2 OBSOLETE
Regression - Inventory: 29 PASS | 1 FAIL
System Test: 20 PASS | 1 FAIL | 12 BLOCKED | 1 OBSOLETE
Regression - new features: 24 PASS | 3 FAIL

---------------------------------------------------------------------------
AUTOMATED EXECUTION - INTEL
---------------------------------------------------------------------------
Overall Results:
Total = 235
Pass = 139
Fail = 75
Not Run = 21
Total executed = 214
Pass Rate = 64.95%
Formula used: Pass Rate = pass * 100 / (pass + fail)

Results per Domain:
Fault-Management: 14 PASS | 1 FAIL
Gnocchi: 12 PASS
HEAT: 6 PASS
High-Availability: 6 PASS | 4 FAIL
Horizon: 2 PASS
Installation-And-Config: 2 PASS | 1 FAIL
Maintenance: 19 PASS | 7 FAIL
Networking: 33 PASS | 17 FAIL
Nova: 7 PASS | 12 FAIL
Security: 12 PASS | 11 FAIL
Storage: 1 PASS | 13 FAIL
SYSINVENTORY: 23 PASS | 5 FAIL
System: 2 PASS | 4 FAIL

----------------------------------------------------------------------
AUTOMATED EXECUTION - Wind River
----------------------------------------------------------------------
Overall Results:
Pass = 609
Fail = 139
Total executed = 748
Pass Rate = 81.4%
Formula used: Pass Rate = pass * 100 / (pass + fail)

Results per Domain:
Horizon: 46 PASS | 4 FAIL
MTC General: 33 PASS | 31 FAIL
Networking: 22 PASS | 11 FAIL
Nova: 184 PASS | 62 FAIL
REST API: 220 PASS | 4 FAIL
Security: 28 PASS | 14 FAIL
Storage: 59 PASS | 10 FAIL
Sysinv: 17 PASS | 3 FAIL

-------------------------------------------------
Bugs:
Controller can't unlock after lock on AIO-SX: https://bugs.launchpad.net/starlingx/+bug/1833472
user does not login within configured time (60s), login is aborted: https://bugs.launchpad.net/starlingx/+bug/1833469
After pull data cable on the compute, no alarm has triggered: https://bugs.launchpad.net/starlingx/+bug/1834512
System account doesn't block after invalid login attempts: https://bugs.launchpad.net/starlingx/+bug/1814345
Containers: lock_host failed on a host with config_drive VM: https://bugs.launchpad.net/starlingx/+bug/1821026
200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot: https://bugs.launchpad.net/starlingx/+bug/1832047
virsh only listing one volume, even though there was an additional volume attached after instantiation: https://bugs.launchpad.net/starlingx/+bug/1834194
3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host: https://bugs.launchpad.net/starlingx/+bug/1834255
stx-openstack apply takes longer time when lock and unlock on standby controller: https://bugs.launchpad.net/starlingx/+bug/1834083
Port list was not showing for some computes during install: https://bugs.launchpad.net/starlingx/+bug/1834245
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute: https://bugs.launchpad.net/starlingx/+bug/1835807
When creating instance with pci-passthrough port getting error: https://bugs.launchpad.net/starlingx/+bug/1836682
unexpected output when wipe unassigned disk: https://bugs.launchpad.net/starlingx/+bug/1836633
VM fail to live migrate after evacuation: https://bugs.launchpad.net/starlingx/+bug/1836402
application apply fails after compute lock and unlock: https://bugs.launchpad.net/starlingx/+bug/1836609
CirrOS VM login takes too much time, and throws different log errors: https://bugs.launchpad.net/starlingx/+bug/1835575
Live Migration Error: Failed to live migrate instance to host "AUTO_SCHEDULE": https://bugs.launchpad.net/starlingx/+bug/1837256
stx-openstack in apply-failed after lock/unlock standby controller: https://bugs.launchpad.net/starlingx/+bug/1837581
403 error in horizon log when trying to update the flavor metadata (and admin user is logged out): https://bugs.launchpad.net/starlingx/+bug/1821213
Create Volume dialog opens (from image panel in Horizon) but getting error "default volume type can not be found": https://bugs.launchpad.net/starlingx/+bug/1826259
instance creating via horizon failed: https://bugs.launchpad.net/starlingx/+bug/1829925
Containers: openstack pods failed after force rebooting active controller: https://bugs.launchpad.net/starlingx/+bug/1816842
After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling": https://bugs.launchpad.net/starlingx/+bug/1836928
nova instance remnant left behind after cold migration completes: https://bugs.launchpad.net/starlingx/+bug/1824858
disk_available_least value updates when instance moved but not to the value expected: https://bugs.launchpad.net/nova/+bug/1834527
Containers: vm unreachable for minutes after live migration or vm reboot: https://bugs.launchpad.net/starlingx/+bug/1818118
100.114 NTP alarm not cleared after swact: https://bugs.launchpad.net/starlingx/+bug/1834071
after changing a setting of panko, stx-openstack failed to reach 'applied' status after 1800 seconds: https://bugs.launchpad.net/starlingx/+bug/1828056

Total Bugs: 28

-----------------------------------------------------------------------------
For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033

Regards!
Maria G
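Since all three sections of the report use the same formula, here is a quick arithmetic check as a minimal Python sketch - the function name is invented here; the inputs are the pass/fail counts from the report above:

    # Pass Rate = pass * 100 / (pass + fail); blocked, not-run and obsolete
    # test cases are excluded from the denominator.
    def pass_rate(passed, failed):
        return passed * 100.0 / (passed + failed)

    print(round(pass_rate(323, 15), 2))   # manual execution        -> 95.56
    print(round(pass_rate(139, 75), 2))   # automated - Intel       -> 64.95
    print(round(pass_rate(609, 139), 2))  # automated - Wind River  -> 81.42 (reported rounded as 81.4)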
URL: From maria.g.perez.ibarra at intel.com Thu Jul 25 22:56:33 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 25 Jul 2019 22:56:33 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190725 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-25 (link) Status: Yellow =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] 1 TCs FAIL Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== AIO - Simplex reboots during application-apply, [this issue is Intermittent] : https://bugs.launchpad.net/starlingx/+bug/1837936 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Jul 26 17:05:27 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 26 Jul 2019 17:05:27 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 07/25 Message-ID: <151EE31B9FCCA54397A757BC674650F0C1572A8A@ALA-MBD.corp.ad.wrs.com> Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Bugs - stx.2.0 gating High: 6 - https://bugs.launchpad.net/starlingx/+bug/1817936 - Austin, code review in progress, but there are concerns with the proposed fix >> Matt recommends we change this to Medium given the VIM recovers and there is no notable system impact. Frank put this as high due to concerns about stability. Leave for now and evaluate again a little later. - https://bugs.launchpad.net/starlingx/+bug/1835807 - Joseph, reproduced. Still investigating the cause. - https://bugs.launchpad.net/starlingx/+bug/1835965 - YaoLe, Confirmed by Paulina, this bug is an issue with steps. So this bug marked as invalid. >> Recommend to update the wiki with the steps - https://bugs.launchpad.net/starlingx/+bug/1836252 - Joseph, unsure if each occurrence is the same issue. Need a complete set of logs to investigate each occurence. >> Logs requested from the reporter - https://bugs.launchpad.net/starlingx/+bug/1836682 - Chenjie, steps are wrong and pci alias is not configured. Need submit a patch to override the pci alias in nova.conf. 
>> There is code already that configures the pci aliases. Matt to addd reference in LP and Chenjie to investigate further why this code is not working. - https://bugs.launchpad.net/starlingx/+bug/1836969 - Unassigned, Matt working on proposal, then need to assign to a developer >> On the fence whether this is high or medium. AIO duplex-direct is not functional w/ IPv6. Medium: 9 - https://bugs.launchpad.net/starlingx/+bug/1817593 - Teresa, not being worked. Can potentially defer to stx.3.0 given this is for a specific config; can be worked around by explicitly defining the cluster network - https://bugs.launchpad.net/starlingx/+bug/1818118 - Joseph, not reproduced yet >> There are other reports that live migration is not working at all - https://bugs.launchpad.net/starlingx/+bug/1821026 - fpxie >> looks to be actively being worked by the assignee. Seems consistent with config-drive. May need to consider changing this to high. - https://bugs.launchpad.net/starlingx/+bug/1822396 - Joseph, reproduced and understood. stx-openstack application not reapplied (and new overrides not generated) after unlock if still in applying state from previous unlock. >> This should be re-assigned to the containers team. There is currently work in progress to optimize the application apply. Ghada to re-assign - https://bugs.launchpad.net/starlingx/+bug/1832047 - Cheng, no one could reproduce the bug. The reporter had also only seen it happened once. >> New logs added. Cheng will review the logs. Matt to check if there was feedback from EricM on pmon behavior - https://bugs.launchpad.net/starlingx/+bug/1832892 - Steve, on vacation, will investigate in August - https://bugs.launchpad.net/starlingx/+bug/1834234 - Unassigned, Need to assign. Can potentially defer to stx.3.0 and note this config as a limitation for IPv6. Workaround is to use a vlan for the cluster network. >> Agreed to move to stx.3.0 - https://bugs.launchpad.net/starlingx/+bug/1834556 - Marvin, reproduced. Still investigating the cause. >> Is this a duplicate of https://bugs.launchpad.net/starlingx/+bug/1832697 which was recently fixed? - https://bugs.launchpad.net/starlingx/+bug/1836972 - Steve, on vacation, will investigate in August Low: 1 - https://bugs.launchpad.net/starlingx/+bug/1830082 - Teresa, lower priority, not being worked. Can potentially defer to stx.3.0 given a configuration step was missed in this scenario. Undecided / New: 3 - https://bugs.launchpad.net/starlingx/+bug/1830286 - Elio >> Still believe this is a configuration issue, so expected to close as Invalid. Elio will send a further update this week. - https://bugs.launchpad.net/starlingx/+bug/1835575 - New >> Assigned; will start investigation/reproduction - https://bugs.launchpad.net/starlingx/+bug/1837759 - New >> Re-assigned to the distro.openstack team stx.2.0 Networking Test Status - Networking Regression - Tracker: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175 - Still working on TCs related to sriov & pci-pt - Not running NUMA mesh TCs that nobody knows what the steps are and what they are intended for - Not able to run any QAT device passthru TCs due to lack of hardware. Ghada believes Ada ordered some and they were used in testing the QAT device upversion. 
Elio to follow-up stx.3.0 - OVS-DPDK Containerization - Prime: Cheng - Spec is under review - comments received from Matt - TSN - Prime: Huifeng - Spec: https://review.opendev.org/#/c/666768/ merged on July 22 - Not expecting code changes; likely integration/validation effort only. Matt to send questions related to proposed test head/application to be used for the validation From Ghada.Khalil at windriver.com Fri Jul 26 17:54:26 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 26 Jul 2019 17:54:26 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - July 25/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0C1573B43@ALA-MBD.corp.ad.wrs.com> Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release meeting agenda / notes July 25 2019 stx.2.0 - Feature Testing Status - Tracker: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237 - Will still keep this open until container testing for helm overrides & ironic is complete (12 TCs). Ada to provide forecast for completion. - Regression Testing Status - Tracker: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175 - Ada & Numan are meeting to refresh the status in more detail. - Current plan is to finish regression execution by August 2. - Automated Regression - The automated regression results will be incorporated in the regular regression emails starting this week. - The Robot TCs are being discontinued, so Intel team will not put effort into fixing any TC bugs. So only Robot TCs that are known to be accurate (no TC issues) will continue to be executed. This is a reduction of about 50%. - The focus will be on maintaining the pyTest TCs which is the chosen platform for automated testing. Any TC issues found there will be fixed. - Bugs - As discussed in the community meeting, the proposal for bug priorities are as follows: - Critical: must fix by release date (Release Build: Aug 23) - High: must be fixed for the stx.2.0 release, but could be fixed *after* the release date (in a maintenance release), fixes will be backported to stx.2.0 - Medium: continue working until release date - fix as many as possible - defer to 3.0 after release date, fixes will be backported between RC1 and the release date, but will not be backported to stx.2.0 after the release date - Low: optional, will be deferred to stx.3.0 at RC1, fixes will not be backported to stx.2.0 - PLs are reviewing their bug priorities to align to the agreed definitions above: - Networking: Done - Containers: Done - Storage: Follow-up w/ Cindy & Frank - Distro.Openstack: In Progress / expected to be done by July 30 - Distro.Other: Follow-up w/ Cindy - Config: Follow-up w/ Dariush - Branch Creation / Logistics - Still on track to create the branch the week of August 5. Ghada to follow-up with Scott & Dean on any logistics prep. - Release Notes - Need to figure out what is needed. Need a prime to help coordinate. - Given the scope of the changes in stx.2.0, it makes sense to provide a list of high level features/capabilities in the release (vs detailed release notes). stx.3.0 Milestone-1 declared last week. 
From Jerry.Sun at windriver.com Fri Jul 26 19:46:22 2019
From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry))
Date: Fri, 26 Jul 2019 19:46:22 +0000
Subject: [Starlingx-discuss] [docs] OpenID connect
Message-ID:

Hi Docs Team,

For story 2006235, I need the following changes to docs:

OpenID connect can now be set up on the kubernetes cluster as a form of authentication. This is done by specifying these values during ansible bootstrap:

apiserver_oidc:
  client_id: ...
  issuer_url: ...
  username_claim: ...

All the values must be specified or all of them must be missing; not specifying any would mean the feature is not configured. These options correspond to the kube-apiserver options of the same name, prefixed with "oidc_". Please refer to the official Kubernetes documentation on how to use these values: https://kubernetes.io/docs/reference/access-authn-authz/authentication/

Currently, the options only set up Kubernetes to support OpenID connect. Authentication by OpenID connect is not forced if these options are used during Ansible bootstrap; the controller can still run Kubernetes commands without OpenID connect auth. If you want to use OpenID connect auth, you will need to configure a (remote) Kubernetes client to use OpenID connect auth as well.

Please note that this will be targeted for R3, so please do not put this in the R2 docs.

Thanks,
Jerry
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From michael.l.tullis at intel.com Fri Jul 26 20:45:59 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Fri, 26 Jul 2019 20:45:59 +0000
Subject: [Starlingx-discuss] [docs] OpenID connect
In-Reply-To: References: Message-ID: <3808363B39586544A6839C76CF81445EA1B9C2E7@ORSMSX104.amr.corp.intel.com>

Thanks Jerry. I’ve created a corresponding doc story with your notes embedded to queue up for R3.

https://storyboard.openstack.org/#!/story/2006289

-- Mike

From: Sun, Yicheng (Jerry) [mailto:Jerry.Sun at windriver.com]
Sent: Friday, July 26, 2019 1:46 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [docs] OpenID connect

Hi Docs Team,

For story 2006235 (https://storyboard.openstack.org/#!/story/2006235), I need the following changes to docs:

OpenID connect can now be set up on the kubernetes cluster as a form of authentication. This is done by specifying these values during ansible bootstrap:

apiserver_oidc:
  client_id: …
  issuer_url: …
  username_claim: …

All the values must be specified or all of them must be missing; not specifying any would mean the feature is not configured. These options correspond to the kube-apiserver options of the same name, prefixed with "oidc_". Please refer to the official Kubernetes documentation on how to use these values: https://kubernetes.io/docs/reference/access-authn-authz/authentication/

Currently, the options only set up Kubernetes to support OpenID connect. Authentication by OpenID connect is not forced if these options are used during Ansible bootstrap; the controller can still run Kubernetes commands without OpenID connect auth. If you want to use OpenID connect auth, you will need to configure a (remote) Kubernetes client to use OpenID connect auth as well.

Please note that this will be targeted for R3, so please do not put this in the R2 docs.

Thanks,
Jerry
-------------- next part --------------
An HTML attachment was scrubbed...
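For readers who want to try the client-side configuration Jerry refers to, here is a minimal sketch using kubectl's built-in OIDC support. Every issuer, client and token value below is a placeholder rather than a StarlingX default; the apiserver_oidc keys are the ones listed above.

# Bootstrap side (ansible overrides file), with placeholder values:
#   apiserver_oidc:
#     client_id: stx-oidc-client
#     issuer_url: https://idp.example.com/dex
#     username_claim: email
#
# Client side: register an OIDC user with a remote kubectl.
kubectl config set-credentials oidc-user \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://idp.example.com/dex \
  --auth-provider-arg=client-id=stx-oidc-client \
  --auth-provider-arg=id-token=<token-obtained-from-your-idp>
kubectl config set-context stx-oidc --cluster=mycluster --user=oidc-user
kubectl config use-context stx-oidc

Note that the id-token has to be obtained from the identity provider out of band; kubectl only stores and presents it.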
URL: From maria.g.perez.ibarra at intel.com Fri Jul 26 23:10:35 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 26 Jul 2019 23:10:35 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190726 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-26 (link) Status: Yellow =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] 1 TCs FAIL Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== AIO - Simplex reboots during application-apply, [this issue is Intermittent] : https://bugs.launchpad.net/starlingx/+bug/1837936 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hu.tianhao at 99cloud.net Mon Jul 29 06:40:12 2019 From: hu.tianhao at 99cloud.net (=?UTF-8?B?6IOh5aSp5piK?=) Date: Mon, 29 Jul 2019 14:40:12 +0800 (GMT+08:00) Subject: [Starlingx-discuss] =?utf-8?q?=5BStarlingx-disscuss=5D=5BBuild=5D?= =?utf-8?q?Fail_to_build_rpm_packages_when_build_StarlingX_ISO?= Message-ID: Hi guys, Recently I got a problem when I try to build StarlingX ISO. When I run the 'build-pkgs' command following the 'stx.2019.05 Build guide ', I can't build rpm packages successfully. Followings are errors in build.log. 
ENTER ['do_with_status'](['bash', '--login', '-c', '/usr/bin/rpmbuild -bs --target x86_64 --nodeps /builddir/build/SPECS/sm-common.spec'], chrootPath='/localdisk/loadbuild/test/starlingx/std/mock/b0/root'env={'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8', 'BUILD_BY': 'test', 'BUILD_DATE': '2019-07-29 04:02:46 +0000', 'REPO': '/localdisk/designer/test/starlingx/cgcs-root', 'WRS_GIT_BRANCH': 'HEAD', 'CGCS_GIT_BRANCH': 'HEAD'}shell=Falselogger=timeout=0uid=1001gid=751user='mockbuild'nspawn_args=[]unshare_net=TrueprintOutput=False) Executing command: ['bash', '--login', '-c', '/usr/bin/rpmbuild -bs --target x86_64 --nodeps /builddir/build/SPECS/sm-common.spec'] with env {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8', 'BUILD_BY': 'test', 'BUILD_DATE': '2019-07-29 04:02:46 +0000', 'REPO': '/localdisk/designer/test/starlingx/cgcs-root', 'WRS_GIT_BRANCH': 'HEAD', 'CGCS_GIT_BRANCH': 'HEAD'} and shell False BUILDSTDERR: /etc/profile: line 45: /dev/null: Permission denied BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied BUILDSTDERR: warning: Macro expanded in comment on line 26: %_unitdir I think BUILDSTDERR: warning: Macro expanded in comment on line 111: %{_unitdir}/* BUILDSTDERR: warning: Macro expanded in comment on line 112: %{_bindir}/* BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 500: /dev/null: Permission denied BUILDSTDERR: *** ERROR: DWARF compression requested, but no dwz installed BUILDSTDERR: error: Bad exit status from /var/tmp/rpm-tmp.VbkqZP (%install) BUILDSTDERR: Macro expanded in comment on line 26: %_unitdir I think BUILDSTDERR: Macro expanded in comment on line 111: %{_unitdir}/* BUILDSTDERR: Macro expanded in comment on line 112: %{_bindir}/* BUILDSTDERR: Bad exit status from /var/tmp/rpm-tmp.VbkqZP (%install) RPM build errors: Child return code was: 1 EXCEPTION: [Error()] Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 96, in trace result = func(*args, **kw) File "/usr/lib/python3.6/site-packages/mockbuild/util.py", line 736, in do_with_status raise exception.Error("Command failed: \n # %s\n%s" % (command, output), child.returncode) mockbuild.exception.Error: Command failed: # bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/sm-common.spec But if I run the 'sudo mock -r test-starlingx-tis-r5-pike-std.cfg sm-common-1.0.0-20.tis.src.rpm' command, rpm packages can be built successfully. I think this is probably a permission problem. But even I change owner and group of these files, the 'build-pkgs' command still failed for same reason. 
I really can't understand this problem. Can anybody give me some comments on it?

Thanks
Tianhao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhang.kunpeng at 99cloud.net Mon Jul 29 08:45:58 2019
From: zhang.kunpeng at 99cloud.net (张鲲鹏)
Date: Mon, 29 Jul 2019 16:45:58 +0800
Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system?
Message-ID: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net>

Hi all,

Has anyone tried to install StarlingX on a ready CentOS system, or on other systems? It would be similar to the way devstack is deployed.

Thanks
Kunpeng

From zhang.kunpeng at 99cloud.net Mon Jul 29 10:17:29 2019
From: zhang.kunpeng at 99cloud.net (张鲲鹏)
Date: Mon, 29 Jul 2019 18:17:29 +0800
Subject: [Starlingx-discuss] [starlingx-discuss]Where are the nova logs of stx2.0 ?
Message-ID:

Hi all,

Where are the openstack services' logs, such as "nova-compute.log/nova-scheduler.log"? I cannot find them in stx2.0.

Thanks
Kunpeng

From Brent.Rowsell at windriver.com Mon Jul 29 11:01:03 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Mon, 29 Jul 2019 11:01:03 +0000
Subject: [Starlingx-discuss] [starlingx-discuss]Where are the nova logs of stx2.0 ?
In-Reply-To: References: Message-ID: <221089DA-1908-4908-B98F-88D57992FA6C@windriver.com>

Look in /var/log/containers

Brent

Sent from my iPhone

> On Jul 29, 2019, at 7:18 AM, 张鲲鹏 wrote:
>
> Hi all,
>
> Where are the openstack services' logs, such as "nova-compute.log/nova-scheduler.log"? I cannot find them in stx2.0.
>
> Thanks
> Kunpeng
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Mon Jul 29 12:13:32 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Mon, 29 Jul 2019 05:13:32 -0700
Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system?
In-Reply-To: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net>
References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net>
Message-ID:

Hi Kunpeng

In the MultiOS subproject, we are performing such experiments, at the moment with Open SUSE. We are facing some technical problems with the runtime dependencies and the way the services start. Can you please describe the steps you are following and what kind of specific problems do you have?

Thanks

Victor Rodriguez

On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote:
>
> Hi all,
>
> Has anyone tried to install StarlingX on a ready CentOS system, or on other systems? It would be similar to the way devstack is deployed.
>
> Thanks
> Kunpeng
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Frank.Miller at windriver.com Mon Jul 29 13:12:36 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Mon, 29 Jul 2019 13:12:36 +0000
Subject: [Starlingx-discuss] StarlingX Containerization Weekly Meeting July 29
Message-ID:

Team Agenda for July 29 meeting:
1. SB status: Mingyuan's final commit for 2004760 (ironic)
2.
stx2.0 gating bugs: 36 (up 3 from one week ago) 15 High & 21 Medium Focus on High bugs - request updates and your plan to address by RC1/stx.2.0 release date: #1833323 Openstack manifest apply hung applying cinder manifest [Tee Ngo] #1817936 Periodic message loss seen between VIM and OpenStack REST APIs [Austin Sun] #1833096 Instances crash on each system application-update operation (and CPU exceeds thresholds) [Al Bailey] #1833746 Some helm charts can override other helm charts (eg: Ironic & OVS) [Bob Church] #1834685 Kubernetes cluster certificate rotation [David Sullivan] #1834796 AIO: Too many rabbit threads [Bin Yang] #1834799 AIO: Too many ngnix worker threads [Bin Yang] #1835567 Remove all references to /etc/nova/instances [Gerry Kopec] #1836239 timeout deploying openstack-cinder chart if performed system storage-tier-modify operation [Daniel Badea] #1836378 stx-openstack application stuck at applying status by processing chart: osh-kube-system-ingress [Lin Shuicheng] #1836609 application apply fails after compute lock and unlock [Stefan Dinescu] #1837055 reapplying stx-openstack application failed on swacted host [Matt Peters] #1837426 Very high platform CPU usage on AIO-DX active controller with stx-openstack installed [Al Bailey] #1837792 stx-openstack application apply aborted [Angie Wang] Also looking for an update on a subset of the Medium's: #1820902 Nova/Neutron daemonset pods restarted on all workers when new worker is added [Ovidu Poncea] #1817958 Containers: Worker nodes are pulling from external registry instead from the internal registry [Erich Cordoba] #1824881 Unlock after force lock enabled the worker according to maintenance but hypervisor remained down [Erich Cordoba] #1837769 stx-openstack application-applying stuck at osh-openstack-placement [Angie Wang] 3. No meeting on Aug 5 due to holiday. Next meeting to be Aug 12. Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4461 bytes Desc: not available URL: From erich.cordoba.malibran at intel.com Mon Jul 29 17:04:30 2019 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Mon, 29 Jul 2019 17:04:30 +0000 Subject: [Starlingx-discuss] [Starlingx-disscuss][Build]Fail to build rpm packages when build StarlingX ISO In-Reply-To: References: Message-ID: <236333bb2fd78e964ca1cddffa1ad4baa2ccc592.camel@intel.com> Hi, It seems that your mock environment is broken, see this error: > BUILDSTDERR: *** ERROR: DWARF compression requested, but no dwz > installed The command that you tried generates a new mock environment and that is why it builds successfully. 
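Before rebuilding anything, it may help to confirm both symptoms directly in the chroot the build system used. A short sketch follows, with the chroot path taken from Tianhao's log and the mock config name left as a placeholder:

# Chroot path comes from the rpmbuild log earlier in this thread.
CHROOT=/localdisk/loadbuild/test/starlingx/std/mock/b0/root
# /dev/null inside the chroot should be a character device with mode 666:
ls -l $CHROOT/dev/null
# Check whether dwz is installed in the chroot (find-debuginfo.sh needs it):
mock -r <your-mock-config>.cfg --chroot "rpm -q dwz"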
If you want to reproduce the issue in the mock environment that is used by the build system, then use the following config file: $ mock -r $MY_WORKSPACE/std/configs/--tis-r5-pike-std/--tis-pike-std.b0.cfg And try to find why dwz cannot be installed. Alternatively you can just clean the mock environment with the ` --clean` flag and try to build again. I hope this can help. -Erich On Mon, 2019-07-29 at 14:40 +0800, 胡天昊 wrote: > Hi guys, > > Recently I got a problem when I try to build StarlingX ISO. When I > run the 'build-pkgs' command following the 'stx.2019.05 Build > guide ', I can't build rpm packages successfully. > Followings are errors in build.log. > > ENTER ['do_with_status'](['bash', '--login', '-c', '/usr/bin/rpmbuild > -bs --target x86_64 --nodeps /builddir/build/SPECS/sm-common.spec'], > chrootPath='/localdisk/loadbuild/test/starlingx/std/mock/b0/root'env= > {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', > 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', > 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': > ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8', 'BUILD_BY': > 'test', 'BUILD_DATE': '2019-07-29 04:02:46 +0000', 'REPO': > '/localdisk/designer/test/starlingx/cgcs-root', 'WRS_GIT_BRANCH': > 'HEAD', 'CGCS_GIT_BRANCH': > 'HEAD'}shell=Falselogger= 0x7fa58100acf8>timeout=0uid=1001gid=751user='mockbuild'nspawn_args=[] > unshare_net=TrueprintOutput=False) > Executing command: ['bash', '--login', '-c', '/usr/bin/rpmbuild -bs > --target x86_64 --nodeps /builddir/build/SPECS/sm-common.spec'] with > env {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', > 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', > 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': > ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8', 'BUILD_BY': > 'test', 'BUILD_DATE': '2019-07-29 04:02:46 +0000', 'REPO': > '/localdisk/designer/test/starlingx/cgcs-root', 'WRS_GIT_BRANCH': > 'HEAD', 'CGCS_GIT_BRANCH': 'HEAD'} and shell False > BUILDSTDERR: /etc/profile: line 45: /dev/null: Permission denied > BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied > BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied > BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied > BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied > BUILDSTDERR: warning: Macro expanded in comment on line 26: %_unitdir > I think > BUILDSTDERR: warning: Macro expanded in comment on line 111: > %{_unitdir}/* > BUILDSTDERR: warning: Macro expanded in comment on line 112: > %{_bindir}/* > > BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: > Permission denied > BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: > Permission denied > BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: > Permission denied > BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: > Permission denied > BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: > Permission denied > BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 500: /dev/null: > Permission denied > BUILDSTDERR: *** ERROR: DWARF compression requested, but no dwz > installed > BUILDSTDERR: error: Bad exit status from /var/tmp/rpm-tmp.VbkqZP > (%install) > BUILDSTDERR: Macro expanded in comment on line 26: %_unitdir I > think > BUILDSTDERR: Macro expanded in comment on line 111: %{_unitdir}/* > BUILDSTDERR: Macro expanded in comment on line 112: %{_bindir}/* > BUILDSTDERR: Bad exit status from /var/tmp/rpm-tmp.VbkqZP > (%install) > RPM build errors: > Child return code was: 1 
> EXCEPTION: [Error()] > Traceback (most recent call last): > File "/usr/lib/python3.6/site- > packages/mockbuild/trace_decorator.py", line 96, in trace > result = func(*args, **kw) > File "/usr/lib/python3.6/site-packages/mockbuild/util.py", line > 736, in do_with_status > raise exception.Error("Command failed: \n # %s\n%s" % (command, > output), child.returncode) > mockbuild.exception.Error: Command failed: > # bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps > /builddir/build/SPECS/sm-common.spec > > > But if I run the 'sudo mock -r test-starlingx-tis-r5-pike-std.cfg sm- > common-1.0.0-20.tis.src.rpm' command, rpm packages can be built > successfully. > I think this is probably a permission problem. But even I change > owner and group of these files, the 'build-pkgs' command still failed > for same reason. > I really can't understand this problem,can anybody give me some > comments for it? > > Thanks > Tianhao > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yong.hu at intel.com Mon Jul 29 17:31:03 2019 From: yong.hu at intel.com (Yong Hu) Date: Mon, 29 Jul 2019 10:31:03 -0700 Subject: [Starlingx-discuss] agenda stx.distro.openstack project meeting WW31.2 Message-ID: <9131023d-d99c-944d-817c-82933704f0ea@intel.com> Here will be the agenda for StarlingX OpenStack sub-project meeting in WW31.2. Feel free to add if any other topics. -------------------------------------------------------------------- 1. review stx.2.0 high LPs - majority issues are related to VM live-migration - *LP owners to update* 2. update patches in upstream: - Nova placement helm chart https://review.opendev.org/#/c/662229/ - Orphan instance cleanup: https://review.openstack.org/#/c/627765/ https://review.opendev.org/#/c/670790/ - NUMA topology: https://review.openstack.org/#/c/621476/ - Nova Upstream Activities from 99Cloud: VCPU mode selection: in nova run-way train. 3. Open From ildiko.vancsa at gmail.com Mon Jul 29 19:42:04 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 29 Jul 2019 21:42:04 +0200 Subject: [Starlingx-discuss] StarlingX confirmation readiness review Message-ID: Hi StarlingX Community, I’m reaching out to you to give a heads up about the project’s confirmation readiness review this Thursday on the TSC call: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Technical_Steering_Committee_Call The project was brought under the OSF umbrella last summer and was officially launched during the fall last year. With that in mind it is time to review the project to see how it’s been progressing since its launch and how close it is to go through the confirmation process. You can find more information about the set of criteria for confirmation on this wiki: https://wiki.openstack.org/w/index.php?title=Governance/Foundation/OSFProjectConfirmationGuidelines This Thursday OSF staff members and the StarlingX TSC are planning to review the project with the guidelines on the above wiki in mind to see how close the project is, what we need to improve and what we should keep doing. I would also like to encourage everyone from the community who’s interested in participating in this discussion to join the call this week and share your views and feedback. Please let me know if you have any questions. 
Thanks and Best Regards, Ildikó From maria.g.perez.ibarra at intel.com Mon Jul 29 22:29:25 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 29 Jul 2019 22:29:25 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190729 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-29 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gerry.Kopec at windriver.com Tue Jul 30 01:06:56 2019 From: Gerry.Kopec at windriver.com (Kopec, Gerald (Gerry)) Date: Tue, 30 Jul 2019 01:06:56 +0000 Subject: [Starlingx-discuss] live migration and CPU compatibility Message-ID: <58CF5BABC9A76946A638A0E8AE48D173788BE64D@ALA-MBD.corp.ad.wrs.com> Hi folks, A couple weeks back, we merged a commit to our nova config in starlingx to default libvirt/cpu_mode option to host-model. https://review.opendev.org/#/c/669544/ This means that the guest model will closely match the host, see nova docs for more details: https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#specify-the-cpu-model-of-kvm-guests However, as part of the live migration prechecks on bare metal environments, nova will compare source and destination hosts for CPU compatibility. If they are not compatible, then live migration will fail. Symptoms will be that the instance will not migrate and you'll see an exception in the nova-compute logs with reason: "Unacceptable CPU info: CPU doesn't have compatibility". Our expectation is that most customers would have homogeneous hardware and if not they would organize their hosts into live migratable groups via the nova host aggregates capability. Same principle applies to test environments so you may have to update lab configuration and automated tests. Gerry -------------- next part -------------- An HTML attachment was scrubbed... 
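To make the host-aggregate grouping Gerry describes concrete, here is a sketch using the standard OpenStack CLI. The aggregate, host, flavor and property names are invented for illustration, and the extra-spec only takes effect where the AggregateInstanceExtraSpecsFilter scheduler filter is enabled:

# Group CPU-compatible workers so live migration stays within the group.
openstack aggregate create skylake-hosts
openstack aggregate add host skylake-hosts compute-0
openstack aggregate add host skylake-hosts compute-1
# Tag the aggregate, then pin flavors to it via extra specs.
openstack aggregate set --property cpu_family=skylake skylake-hosts
openstack flavor set --property aggregate_instance_extra_specs:cpu_family=skylake my-flavor

Instances booted with my-flavor will then only schedule, and therefore only live migrate, onto hosts in that aggregate.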
URL:

From zhang.kunpeng at 99cloud.net Tue Jul 30 01:58:03 2019
From: zhang.kunpeng at 99cloud.net (张鲲鹏)
Date: Tue, 30 Jul 2019 09:58:03 +0800
Subject: [Starlingx-discuss] [starlingx-discuss]Where are the nova logs of stx2.0 ?
In-Reply-To: <221089DA-1908-4908-B98F-88D57992FA6C@windriver.com>
References: <221089DA-1908-4908-B98F-88D57992FA6C@windriver.com>
Message-ID: <7248EE03-5F4B-4C66-AC9A-0F005B85442B@99cloud.net>

Thank you very much!

Kunpeng

> On Jul 29, 2019, at 19:01, Rowsell, Brent wrote:
>
> Look in /var/log/containers
>
> Brent
>
> Sent from my iPhone
>
>> On Jul 29, 2019, at 7:18 AM, 张鲲鹏 wrote:
>>
>> Hi all,
>>
>> Where are the openstack services' logs, such as "nova-compute.log/nova-scheduler.log"? I cannot find them in stx2.0.
>>
>> Thanks
>> Kunpeng
>>
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From zhang.kunpeng at 99cloud.net Tue Jul 30 10:10:22 2019
From: zhang.kunpeng at 99cloud.net (张鲲鹏)
Date: Tue, 30 Jul 2019 18:10:22 +0800
Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system?
In-Reply-To: References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net>
Message-ID:

Hi Victor,

I haven’t started yet, because I don’t know how to do it. I'm glad you are performing these experiments; could you share your experience or documents? You know, booting from the ISO isn’t a good deployment method in some cases. Recently, I studied the kickstart configs in bootimage.iso, and my idea is to set up the ready system according to ks.cfg, but I’m not sure if that’s the right way.

Thanks
Kunpeng

> On Jul 29, 2019, at 20:13, Victor Rodriguez wrote:
>
> Hi Kunpeng
>
> In the MultiOS subproject, we are performing such experiments, at the
> moment with Open SUSE. We are facing some technical problems with the
> runtime dependencies and the way the services start. Can you please
> describe the steps you are following and what kind of specific
> problems do you have?
>
> Thanks
>
> Victor Rodriguez
>
>
> On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote:
>>
>> Hi all,
>>
>> Has anyone tried to install StarlingX on a ready CentOS system, or on other systems? It would be similar to the way devstack is deployed.
>>
>> Thanks
>> Kunpeng
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From ildiko.vancsa at gmail.com Tue Jul 30 12:16:22 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Tue, 30 Jul 2019 14:16:22 +0200
Subject: [Starlingx-discuss] Edge Hacking Days
Message-ID:

Hi,

I’m reaching out with an attempt from the edge computing group to organize hacking days to work on edge related tasks.
The idea is to get together remotely on IRC/Zoom or any other platform that supports remote communication and work on items like building and testing our reference architectures or work on some project specific items like in Keystone or Ironic. Here are Doodle polls for the next three months: August: https://doodle.com/poll/ucfc9w7iewe6gdp4 September: https://doodle.com/poll/3cyqxzr9vd82pwtr October: https://doodle.com/poll/6nzziuihs65hwt7b Please mark any day when you have some availability to dedicate to hack even if it’s not a full day. Please let me know if you have any questions. As a reminder you can find the edge computing group’s resources and information about latest activities here: https://wiki.openstack.org/wiki/Edge_Computing_Group Thanks and Best Regards, Ildikó (IRC: ildikov) From cindy.xie at intel.com Tue Jul 30 12:48:17 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 30 Jul 2019 12:48:17 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F360019C4@SHSMSX104.ccr.corp.intel.com> Agenda: 1. Bug triage & review, re-prioritize LPs (Cindy/Saul/Brent) 2. Sanity test status for kernel minor upgrade (Shuai) 3. Ceph containerization plan review (Tingjie) 4. Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; Wold, Saul; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; 'zhaos' Cc: 'Seiler, Glenn'; Hu, Wei W; Peng Tan; Gomez, Juan P; 'Waines, Greg'; 'Eslimi, Dariush'; Jones, Bruce E; 'Zhi Zhi2 Chang'; Chen, Tingjie; 'Badea, Daniel'; 'Chen, Jacky'; 'Komiyama, Takeo'; Armstrong, Robert H; 'Carlos Cebrian'; Cobbley, David A Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, July 31, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 * Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Jul 30 12:52:22 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 30 Jul 2019 12:52:22 +0000 Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? In-Reply-To: References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com> I think it's doable. @Yan, did you ever try to run the provision scripts on a ready CentOS bare-metal? Thx. - cindy -----Original Message----- From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Tuesday, July 30, 2019 6:10 PM To: Victor Rodriguez Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? Hi Victor, I haven’t started yet, because I don’t know how to do it. 
I'm glad you are performing this experiments, could you share your experience or documents? You know, booting from iso isn’t a well deployment in some cases. Recently, I studied the kickstart configs in bootimage.iso, and my idea is to operate the ready system according to the ks.cfg, but I’m not sure if it’s a right way. Thanks Kunpeng > On Jul 29, 2019, at 20:13, Victor Rodriguez wrote: > > Hi Kunpeng > > In the MultiOS subproject, we are performing such experiments, at the > moment with Open SUSE. We are facing some technical problems with the > runtime dependencies and the way the services start. Can you please > describe the steps you are following and what kind of specific > problems do you have? > > Thanks > > Victor Rodriguez > > > On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote: >> >> Hi all, >> >> Are there anyone to try install StarlingX in a ready centos system or others systems? It looks like to deploy devstack. >> >> Thanks >> Kunpeng >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Tue Jul 30 13:28:42 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 30 Jul 2019 13:28:42 +0000 Subject: [Starlingx-discuss] Community Call (July 31, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ABCD03@ALA-MBD.corp.ad.wrs.com> Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include... - sanity - any red sanities since last Community meeting? - reviews in need of attention - defect trend / gating launchpads - stx.2.0 RC1 is next week - documentation update - bitergia update: see https://etherpad.openstack.org/p/stx-bitergia - open actions from previous meetings Please feel free to add topics on the etherpad [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190731T1400 From Anirudh.Gupta at hsc.com Tue Jul 30 11:02:33 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Tue, 30 Jul 2019 11:02:33 +0000 Subject: [Starlingx-discuss] Config-file approach to run Config-controller Message-ID: Hi Team, I am using StarlingX 2018.10 Release and want to reduce the manual effort in running "config-controller". For this I found a way in which we can pass a Config-File as a parameter while running the "config-controller". localhost:~$ config_controller --help Usage: /usr/bin/config_controller Perform system configuration The default action is to perform the initial configuration for the system. 
The following options are also available: --config-file Perform configuration using INI file --backup Backup configuration using the given name --clone-iso Clone and create an image with the given file name --clone-status Status of the last installation of cloned image --restore-system Restore system configuration from backup file with the given name, full path required --restore-images Restore images from backup file with the given name, full path required --restore-complete Complete restore of controller-0--allow-ssh Allow configuration to be executed in ssh But I can't find any Sample config-file available in the documents. Also I found a bug https://bugs.launchpad.net/starlingx/+bug/1814833 which has issue in running the config_file. Can someone please tell if the method of passing config-file in supported in StarlingX 2018.10? If yes, can you please share a sample config-file? If the method is not supported, can someone please share a method to reduce the manual efforts and automate the inputs required in the config_controller? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tee.Ngo at windriver.com Tue Jul 30 19:11:16 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Tue, 30 Jul 2019 19:11:16 +0000 Subject: [Starlingx-discuss] FM-doc compile Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453A97605@ALA-MBD.corp.ad.wrs.com> Hi, If you pulled stx-fault repo this afternoon and hit an fm-doc build issue, either revert the following commit: commit b4c088c6a4215a8aea2da306c4bc7fe616f6bce6 Author: Tee Ngo Date: Fri Jul 26 17:53:13 2019 -0400 Define alarm group, type and ids for application or include this commit: https://review.opendev.org/#/c/673615/ Sorry for the inconvenience. The issue will be resolved shortly. Tee -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jul 30 19:19:27 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 30 Jul 2019 21:19:27 +0200 Subject: [Starlingx-discuss] [bitergia] Affiliations change requests for the Bitergia dashboard Message-ID: <0A745C05-B232-4AEB-B837-4F70175B154E@gmail.com> Hi, I’m reaching out to you as fixing affiliation information on the Bitergia dashboard (https://starlingx.biterg.io) we have for community metrics is still a manual process that require permissions. As mentioned on an earlier mail thread if you have your affiliation incorrectly in the tool please send a mail to the starlingx-discuss mailing list to get it fixed. __Please use the tag [bitergia] in the subject of your email so we don’t miss your request.__ Please let me know if you have any questions. 
Thanks, Ildikó From maria.g.perez.ibarra at intel.com Tue Jul 30 22:16:17 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 30 Jul 2019 22:16:17 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190730 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-30 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From maria.g.perez.ibarra at intel.com Tue Jul 30 23:18:36 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 30 Jul 2019 23:18:36 +0000
Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 7/30/19
Message-ID:

StarlingX 2.0 Release Status:
ISO: BUILD_ID=" 20190724T013000Z" from (link)

----------------------------------------------------------------------
MANUAL EXECUTION
----------------------------------------------------------------------
Overall Results:
Total = 492
Pass = 371
Fail = 20
Blocked = 30
Not Run = 39
Obsolete = 26
Deferred = 6
Total executed = 421
Pass Rate = 94.88%
Formula used: Pass Rate = pass * 100 / (pass + fail)

Results per Domain:
Regression - AIO-SX 26 PASS | 1 OBSOLETE
Regression - Backup & Restore 6 DEFERRED
Regression - Distributed Cloud
Regression - Gnocchi 15 PASS
Regression - FM 3 PASS
Regression - HA 11 PASS | 1 FAIL
Regression - Heat 12 PASS | 1 OBSOLETE
Regression - Horizon 4 PASS
Regression - Install and Config 5 PASS | 1 FAIL
Regression - Maintenance 8 PASS | 1 FAIL
Regression - Networking 111 PASS | 2 FAIL | 11 BLOCKED | 19 OBSOLETE
Regression - Nova 17 PASS | 7 FAIL
Regression - Security 34 PASS | 1 FAIL | 6 BLOCKED | 1 OBSOLETE
Regression - Storage 23 PASS | 1 BLOCKED | 3 OBSOLETE
Regression - Inventory 29 PASS | 1 FAIL
System Test 20 PASS | 1 FAIL | 12 BLOCKED | 1 OBSOLETE
Regression - new features 54 PASS | 5 FAIL

---------------------------------------------------------------------------
AUTOMATED EXECUTION - INTEL
---------------------------------------------------------------------------
Overall Results:
Total = 235
Pass = 156
Fail = 66
Not Run = 13
Total executed = 222
Pass Rate = 70.27%
Formula used: Pass Rate = pass * 100 / (pass + fail)

Results per Domain:
Fault-Management 15 PASS
Gnocchi 12 PASS
HEAT 6 PASS
High-Availability 6 PASS | 4 FAIL
Horizon 2 PASS
Installation-And-Config 6 PASS | 1 FAIL
Maintenance 19 PASS | 7 FAIL
Networking 35 PASS | 16 FAIL
Nova 8 PASS | 11 FAIL
Security 15 PASS | 8 FAIL
Storage 1 PASS | 13 FAIL
SYSINVENTORY 28 PASS | 2 FAIL
System 3 PASS | 4 FAIL

----------------------------------------------------------------------
AUTOMATED EXECUTION - Wind River
----------------------------------------------------------------------
"Pending results"

-------------------------------------------------
Bugs:
user does not login within configured time(60s) login is aborted https://bugs.launchpad.net/starlingx/+bug/1833469
After pull data cable on the compute, no alarm has triggered https://bugs.launchpad.net/starlingx/+bug/1834512
Containers: lock_host failed on a host with config_drive VM https://bugs.launchpad.net/starlingx/+bug/1821026
200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot https://bugs.launchpad.net/starlingx/+bug/1832047
3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host https://bugs.launchpad.net/starlingx/+bug/1834255
stx-openstack apply takes longer time when lock and unlock on standby controller https://bugs.launchpad.net/starlingx/+bug/1834083
Port list was not showing for some computes during install https://bugs.launchpad.net/starlingx/+bug/1834245
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute https://bugs.launchpad.net/starlingx/+bug/1835807
When creating instance with pci-passthrough port getting error https://bugs.launchpad.net/starlingx/+bug/1836682
unexpected output when wipe unassigned disk
https://bugs.launchpad.net/starlingx/+bug/1836633 VM fail to live migrate after evacuation https://bugs.launchpad.net/starlingx/+bug/1836402 application apply fails after compute lock and unlock https://bugs.launchpad.net/starlingx/+bug/1836609 CirrOS VM login takes too much time, and throw different log errors https://bugs.launchpad.net/starlingx/+bug/1835575 Live Migration Error: Failed to live migrate instance to host "AUTO_SCHEDULE". https://bugs.launchpad.net/starlingx/+bug/1837256 stx-openstack in apply-failed after lock/unlock standby controller https://bugs.launchpad.net/starlingx/+bug/1837581 403 error in horizon log when try to update the flavor metadata (and admin user is logged out) https://bugs.launchpad.net/starlingx/+bug/1821213 Create Volume dialog opens (from image panel in Horizon) but getting error default volume type can not be found https://bugs.launchpad.net/starlingx/+bug/1826259 instance creating via horizon failed https://bugs.launchpad.net/starlingx/+bug/1829925 Containers: openstack pods failed after force rebooting active controller https://bugs.launchpad.net/starlingx/+bug/1816842 After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling" https://bugs.launchpad.net/starlingx/+bug/1836928 nova instance remnant left behind after cold migration completes https://bugs.launchpad.net/starlingx/+bug/1824858 disk_available_least value updates when instance moved but not to the value expected https://bugs.launchpad.net/nova/+bug/1834527 Containers: vm unreachable for minutes after live migration or vm reboot https://bugs.launchpad.net/starlingx/+bug/1818118 100.114 NTP alarm not cleared after swact https://bugs.launchpad.net/starlingx/+bug/1834071 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 after changing a setting of panko stx-openstack failed to reach 'applied' status after 1800 seconds https://bugs.launchpad.net/starlingx/+bug/1828056 AIO-DX Application apply aborted Unexpected process termination while application-apply was in progress https://bugs.launchpad.net/starlingx/+bug/1838101 Uncontrolled swact on standard system is slow https://bugs.launchpad.net/starlingx/+bug/1838411 tenant-mgmt-net not reachable from external network https://bugs.launchpad.net/starlingx/+bug/1836252 total bugs: 29 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From yan.chen at intel.com Wed Jul 31 01:24:31 2019 From: yan.chen at intel.com (Chen, Yan) Date: Wed, 31 Jul 2019 01:24:31 +0000 Subject: [Starlingx-discuss] Config-file approach to run Config-controller In-Reply-To: References: Message-ID: <72AD03D27224C74982BE13246D75B39739A3F659@SHSMSX103.ccr.corp.intel.com> Hi, You can try the attached sample files. Yan From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Tuesday, July 30, 2019 19:03 To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: [Starlingx-discuss] Config-file approach to run Config-controller Hi Team, I am using StarlingX 2018.10 Release and want to reduce the manual effort in running "config-controller". For this I found a way in which we can pass a Config-File as a parameter while running the "config-controller". 
From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Tuesday, July 30, 2019 19:03
To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: [Starlingx-discuss] Config-file approach to run Config-controller

Hi Team,

I am using StarlingX 2018.10 Release and want to reduce the manual effort in running "config-controller". For this I found a way in which we can pass a config file as a parameter while running "config-controller".

localhost:~$ config_controller --help
Usage: /usr/bin/config_controller
Perform system configuration
The default action is to perform the initial configuration for the system.
The following options are also available:
--config-file       Perform configuration using INI file
--backup            Backup configuration using the given name
--clone-iso         Clone and create an image with the given file name
--clone-status      Status of the last installation of cloned image
--restore-system    Restore system configuration from backup file with the given name, full path required
--restore-images    Restore images from backup file with the given name, full path required
--restore-complete  Complete restore of controller-0
--allow-ssh         Allow configuration to be executed in ssh

But I can't find any sample config-file available in the documents. Also I found a bug https://bugs.launchpad.net/starlingx/+bug/1814833 which reports an issue in running with a config file.

Can someone please tell if the method of passing a config-file is supported in StarlingX 2018.10? If yes, can you please share a sample config-file? If the method is not supported, can someone please share a method to reduce the manual effort and automate the inputs required by config_controller?

Regards
Anirudh Gupta

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: simplex.conf
Type: application/octet-stream
Size: 380 bytes
Desc: simplex.conf
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: duplex.conf
Type: application/octet-stream
Size: 709 bytes
Desc: duplex.conf
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: multi.conf
Type: application/octet-stream
Size: 647 bytes
Desc: multi.conf
URL: 

From yan.chen at intel.com Wed Jul 31 01:38:59 2019
From: yan.chen at intel.com (Chen, Yan)
Date: Wed, 31 Jul 2019 01:38:59 +0000
Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system?
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com>
References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com>
Message-ID: <72AD03D27224C74982BE13246D75B39739A3F69C@SHSMSX103.ccr.corp.intel.com>

No, I think a ready CentOS means a standard CentOS system, which is not installed from StarlingX ISO. We didn’t try to install StarlingX on such a system.
Yan -----Original Message----- From: Xie, Cindy Sent: Tuesday, July 30, 2019 20:52 To: 张鲲鹏 ; Victor Rodriguez ; Chen, Yan Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? I think it's doable. @Yan, did you ever try to run the provision scripts on a ready CentOS bare-metal? Thx. - cindy -----Original Message----- From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Tuesday, July 30, 2019 6:10 PM To: Victor Rodriguez Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? Hi Victor, I haven’t started yet, because I don’t know how to do it. I'm glad you are performing this experiments, could you share your experience or documents? You know, booting from iso isn’t a well deployment in some cases. Recently, I studied the kickstart configs in bootimage.iso, and my idea is to operate the ready system according to the ks.cfg, but I’m not sure if it’s a right way. Thanks Kunpeng > On Jul 29, 2019, at 20:13, Victor Rodriguez wrote: > > Hi Kunpeng > > In the MultiOS subproject, we are performing such experiments, at the > moment with Open SUSE. We are facing some technical problems with the > runtime dependencies and the way the services start. Can you please > describe the steps you are following and what kind of specific > problems do you have? > > Thanks > > Victor Rodriguez > > > On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote: >> >> Hi all, >> >> Are there anyone to try install StarlingX in a ready centos system or others systems? It looks like to deploy devstack. >> >> Thanks >> Kunpeng >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From shuicheng.lin at intel.com Wed Jul 31 02:21:09 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 31 Jul 2019 02:21:09 +0000 Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? In-Reply-To: <72AD03D27224C74982BE13246D75B39739A3F69C@SHSMSX103.ccr.corp.intel.com> References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com> <72AD03D27224C74982BE13246D75B39739A3F69C@SHSMSX103.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608BCB11@SHSMSX105.ccr.corp.intel.com> My concern is that, the CentOS kernel used in StarlingX is customized, it is not the same as the CentOS. So some feature/component may not work with a standard CentOS system. Best Regards Shuicheng -----Original Message----- From: Chen, Yan [mailto:yan.chen at intel.com] Sent: Wednesday, July 31, 2019 9:39 AM To: Xie, Cindy ; 张鲲鹏 ; Victor Rodriguez Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? 
No, I think a ready CentOS means a standard CentOS system, which is not installed from StarlingX ISO. We didn’t try to install StarlingX on such a system. Yan -----Original Message----- From: Xie, Cindy Sent: Tuesday, July 30, 2019 20:52 To: 张鲲鹏 ; Victor Rodriguez ; Chen, Yan Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? I think it's doable. @Yan, did you ever try to run the provision scripts on a ready CentOS bare-metal? Thx. - cindy -----Original Message----- From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Tuesday, July 30, 2019 6:10 PM To: Victor Rodriguez Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? Hi Victor, I haven’t started yet, because I don’t know how to do it. I'm glad you are performing this experiments, could you share your experience or documents? You know, booting from iso isn’t a well deployment in some cases. Recently, I studied the kickstart configs in bootimage.iso, and my idea is to operate the ready system according to the ks.cfg, but I’m not sure if it’s a right way. Thanks Kunpeng > On Jul 29, 2019, at 20:13, Victor Rodriguez wrote: > > Hi Kunpeng > > In the MultiOS subproject, we are performing such experiments, at the > moment with Open SUSE. We are facing some technical problems with the > runtime dependencies and the way the services start. Can you please > describe the steps you are following and what kind of specific > problems do you have? > > Thanks > > Victor Rodriguez > > > On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote: >> >> Hi all, >> >> Are there anyone to try install StarlingX in a ready centos system or others systems? It looks like to deploy devstack. >> >> Thanks >> Kunpeng >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhang.kunpeng at 99cloud.net Wed Jul 31 03:44:10 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Wed, 31 Jul 2019 11:44:10 +0800 Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? In-Reply-To: <9700A18779F35F49AF027300A49E7C76608BCB11@SHSMSX105.ccr.corp.intel.com> References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com> <72AD03D27224C74982BE13246D75B39739A3F69C@SHSMSX103.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608BCB11@SHSMSX105.ccr.corp.intel.com> Message-ID: What are the differences between StarlingX kernel and standard CentOS kernel? On the other hand, which flock services are affected? 
I mean, is the StarlingX complete if those based services(eg,docker/libvirt/ceph) and flock services(eg, fm) are installed and launched, and the risk is that the flock services cannot be launched well in standard kernel? Thanks Kunpeng > On Jul 31, 2019, at 10:21, Lin, Shuicheng wrote: > > My concern is that, the CentOS kernel used in StarlingX is customized, it is not the same as the CentOS. > So some feature/component may not work with a standard CentOS system. > > Best Regards > Shuicheng > > -----Original Message----- > From: Chen, Yan [mailto:yan.chen at intel.com] > Sent: Wednesday, July 31, 2019 9:39 AM > To: Xie, Cindy ; 张鲲鹏 ; Victor Rodriguez > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > No, I think a ready CentOS means a standard CentOS system, which is not installed from StarlingX ISO. > We didn’t try to install StarlingX on such a system. > > > Yan > > -----Original Message----- > From: Xie, Cindy > Sent: Tuesday, July 30, 2019 20:52 > To: 张鲲鹏 ; Victor Rodriguez ; Chen, Yan > Cc: starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > I think it's doable. @Yan, did you ever try to run the provision scripts on a ready CentOS bare-metal? > > Thx. - cindy > > -----Original Message----- > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Tuesday, July 30, 2019 6:10 PM > To: Victor Rodriguez > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > Hi Victor, > > I haven’t started yet, because I don’t know how to do it. I'm glad you are performing this experiments, could you share your experience or documents? You know, booting from iso isn’t a well deployment in some cases. > Recently, I studied the kickstart configs in bootimage.iso, and my idea is to operate the ready system according to the ks.cfg, but I’m not sure if it’s a right way. > > Thanks > Kunpeng > >> On Jul 29, 2019, at 20:13, Victor Rodriguez wrote: >> >> Hi Kunpeng >> >> In the MultiOS subproject, we are performing such experiments, at the >> moment with Open SUSE. We are facing some technical problems with the >> runtime dependencies and the way the services start. Can you please >> describe the steps you are following and what kind of specific >> problems do you have? >> >> Thanks >> >> Victor Rodriguez >> >> >> On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote: >>> >>> Hi all, >>> >>> Are there anyone to try install StarlingX in a ready centos system or others systems? It looks like to deploy devstack. 
>>> >>> Thanks >>> Kunpeng >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From shuicheng.lin at intel.com Wed Jul 31 06:32:00 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 31 Jul 2019 06:32:00 +0000 Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? In-Reply-To: References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com> <72AD03D27224C74982BE13246D75B39739A3F69C@SHSMSX103.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608BCB11@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608BCC84@SHSMSX105.ccr.corp.intel.com> Hi Kunpeng, You could find kernel patches in below link. Most of them are performance/bug related. https://opendev.org/starlingx/integ/src/branch/master/kernel/kernel-std/centos/patches Sorry, but I don’t have data which flock services will be affected by it. Best Regards Shuicheng -----Original Message----- From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Wednesday, July 31, 2019 11:44 AM To: Lin, Shuicheng ; Chen, Yan ; Xie, Cindy ; Victor Rodriguez Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? What are the differences between StarlingX kernel and standard CentOS kernel? On the other hand, which flock services are affected? I mean, is the StarlingX complete if those based services(eg,docker/libvirt/ceph) and flock services(eg, fm) are installed and launched, and the risk is that the flock services cannot be launched well in standard kernel? Thanks Kunpeng > On Jul 31, 2019, at 10:21, Lin, Shuicheng wrote: > > My concern is that, the CentOS kernel used in StarlingX is customized, it is not the same as the CentOS. > So some feature/component may not work with a standard CentOS system. > > Best Regards > Shuicheng > > -----Original Message----- > From: Chen, Yan [mailto:yan.chen at intel.com] > Sent: Wednesday, July 31, 2019 9:39 AM > To: Xie, Cindy ; 张鲲鹏 ; > Victor Rodriguez > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > No, I think a ready CentOS means a standard CentOS system, which is not installed from StarlingX ISO. > We didn’t try to install StarlingX on such a system. 
> > > Yan > > -----Original Message----- > From: Xie, Cindy > Sent: Tuesday, July 30, 2019 20:52 > To: 张鲲鹏 ; Victor Rodriguez > ; Chen, Yan > Cc: starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > I think it's doable. @Yan, did you ever try to run the provision scripts on a ready CentOS bare-metal? > > Thx. - cindy > > -----Original Message----- > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Tuesday, July 30, 2019 6:10 PM > To: Victor Rodriguez > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > Hi Victor, > > I haven’t started yet, because I don’t know how to do it. I'm glad you are performing this experiments, could you share your experience or documents? You know, booting from iso isn’t a well deployment in some cases. > Recently, I studied the kickstart configs in bootimage.iso, and my idea is to operate the ready system according to the ks.cfg, but I’m not sure if it’s a right way. > > Thanks > Kunpeng > >> On Jul 29, 2019, at 20:13, Victor Rodriguez wrote: >> >> Hi Kunpeng >> >> In the MultiOS subproject, we are performing such experiments, at the >> moment with Open SUSE. We are facing some technical problems with the >> runtime dependencies and the way the services start. Can you please >> describe the steps you are following and what kind of specific >> problems do you have? >> >> Thanks >> >> Victor Rodriguez >> >> >> On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote: >>> >>> Hi all, >>> >>> Are there anyone to try install StarlingX in a ready centos system or others systems? It looks like to deploy devstack. >>> >>> Thanks >>> Kunpeng >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhang.kunpeng at 99cloud.net Wed Jul 31 09:18:56 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Wed, 31 Jul 2019 17:18:56 +0800 Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? 
In-Reply-To: <9700A18779F35F49AF027300A49E7C76608BCC84@SHSMSX105.ccr.corp.intel.com> References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com> <72AD03D27224C74982BE13246D75B39739A3F69C@SHSMSX103.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608BCB11@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608BCC84@SHSMSX105.ccr.corp.intel.com> Message-ID: Emmmmmmm, I think it’s hard for me to study these so many kernel patches. By the way, I have a naive question that what are the meanings of "kernel-rt" and "kernel-std”. Thanks Kunpeng > On Jul 31, 2019, at 14:32, Lin, Shuicheng wrote: > > Hi Kunpeng, > You could find kernel patches in below link. Most of them are performance/bug related. > https://opendev.org/starlingx/integ/src/branch/master/kernel/kernel-std/centos/patches > Sorry, but I don’t have data which flock services will be affected by it. > > > Best Regards > Shuicheng > > > -----Original Message----- > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Wednesday, July 31, 2019 11:44 AM > To: Lin, Shuicheng ; Chen, Yan ; Xie, Cindy ; Victor Rodriguez > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > What are the differences between StarlingX kernel and standard CentOS kernel? On the other hand, which flock services are affected? > I mean, is the StarlingX complete if those based services(eg,docker/libvirt/ceph) and flock services(eg, fm) are installed and launched, and the risk is that the flock services cannot be launched well in standard kernel? > > Thanks > Kunpeng > >> On Jul 31, 2019, at 10:21, Lin, Shuicheng wrote: >> >> My concern is that, the CentOS kernel used in StarlingX is customized, it is not the same as the CentOS. >> So some feature/component may not work with a standard CentOS system. >> >> Best Regards >> Shuicheng >> >> -----Original Message----- >> From: Chen, Yan [mailto:yan.chen at intel.com] >> Sent: Wednesday, July 31, 2019 9:39 AM >> To: Xie, Cindy ; 张鲲鹏 ; >> Victor Rodriguez >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? >> >> No, I think a ready CentOS means a standard CentOS system, which is not installed from StarlingX ISO. >> We didn’t try to install StarlingX on such a system. >> >> >> Yan >> >> -----Original Message----- >> From: Xie, Cindy >> Sent: Tuesday, July 30, 2019 20:52 >> To: 张鲲鹏 ; Victor Rodriguez >> ; Chen, Yan >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? >> >> I think it's doable. @Yan, did you ever try to run the provision scripts on a ready CentOS bare-metal? >> >> Thx. - cindy >> >> -----Original Message----- >> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] >> Sent: Tuesday, July 30, 2019 6:10 PM >> To: Victor Rodriguez >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? >> >> Hi Victor, >> >> I haven’t started yet, because I don’t know how to do it. I'm glad you are performing this experiments, could you share your experience or documents? You know, booting from iso isn’t a well deployment in some cases. 
>> Recently, I studied the kickstart configs in bootimage.iso, and my idea is to operate the ready system according to the ks.cfg, but I’m not sure if it’s a right way. >> >> Thanks >> Kunpeng >> >>> On Jul 29, 2019, at 20:13, Victor Rodriguez wrote: >>> >>> Hi Kunpeng >>> >>> In the MultiOS subproject, we are performing such experiments, at the >>> moment with Open SUSE. We are facing some technical problems with the >>> runtime dependencies and the way the services start. Can you please >>> describe the steps you are following and what kind of specific >>> problems do you have? >>> >>> Thanks >>> >>> Victor Rodriguez >>> >>> >>> On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote: >>>> >>>> Hi all, >>>> >>>> Are there anyone to try install StarlingX in a ready centos system or others systems? It looks like to deploy devstack. >>>> >>>> Thanks >>>> Kunpeng >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Wed Jul 31 12:32:50 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 31 Jul 2019 12:32:50 +0000 Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? In-Reply-To: References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com> <72AD03D27224C74982BE13246D75B39739A3F69C@SHSMSX103.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608BCB11@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608BCC84@SHSMSX105.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F36002F99@SHSMSX104.ccr.corp.intel.com> Kernel-rt: real time Linux kernel; Kernel-std: standard kernel. -----Original Message----- From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Wednesday, July 31, 2019 5:19 PM To: Lin, Shuicheng Cc: Chen, Yan ; Xie, Cindy ; Victor Rodriguez ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? Emmmmmmm, I think it’s hard for me to study these so many kernel patches. By the way, I have a naive question that what are the meanings of "kernel-rt" and "kernel-std”. Thanks Kunpeng > On Jul 31, 2019, at 14:32, Lin, Shuicheng wrote: > > Hi Kunpeng, > You could find kernel patches in below link. Most of them are performance/bug related. 
> https://opendev.org/starlingx/integ/src/branch/master/kernel/kernel-st > d/centos/patches Sorry, but I don’t have data which flock services > will be affected by it. > > > Best Regards > Shuicheng > > > -----Original Message----- > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Wednesday, July 31, 2019 11:44 AM > To: Lin, Shuicheng ; Chen, Yan > ; Xie, Cindy ; Victor > Rodriguez > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > What are the differences between StarlingX kernel and standard CentOS kernel? On the other hand, which flock services are affected? > I mean, is the StarlingX complete if those based services(eg,docker/libvirt/ceph) and flock services(eg, fm) are installed and launched, and the risk is that the flock services cannot be launched well in standard kernel? > > Thanks > Kunpeng > >> On Jul 31, 2019, at 10:21, Lin, Shuicheng wrote: >> >> My concern is that, the CentOS kernel used in StarlingX is customized, it is not the same as the CentOS. >> So some feature/component may not work with a standard CentOS system. >> >> Best Regards >> Shuicheng >> >> -----Original Message----- >> From: Chen, Yan [mailto:yan.chen at intel.com] >> Sent: Wednesday, July 31, 2019 9:39 AM >> To: Xie, Cindy ; 张鲲鹏 >> ; Victor Rodriguez >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? >> >> No, I think a ready CentOS means a standard CentOS system, which is not installed from StarlingX ISO. >> We didn’t try to install StarlingX on such a system. >> >> >> Yan >> >> -----Original Message----- >> From: Xie, Cindy >> Sent: Tuesday, July 30, 2019 20:52 >> To: 张鲲鹏 ; Victor Rodriguez >> ; Chen, Yan >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? >> >> I think it's doable. @Yan, did you ever try to run the provision scripts on a ready CentOS bare-metal? >> >> Thx. - cindy >> >> -----Original Message----- >> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] >> Sent: Tuesday, July 30, 2019 6:10 PM >> To: Victor Rodriguez >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? >> >> Hi Victor, >> >> I haven’t started yet, because I don’t know how to do it. I'm glad you are performing this experiments, could you share your experience or documents? You know, booting from iso isn’t a well deployment in some cases. >> Recently, I studied the kickstart configs in bootimage.iso, and my idea is to operate the ready system according to the ks.cfg, but I’m not sure if it’s a right way. >> >> Thanks >> Kunpeng >> >>> On Jul 29, 2019, at 20:13, Victor Rodriguez wrote: >>> >>> Hi Kunpeng >>> >>> In the MultiOS subproject, we are performing such experiments, at >>> the moment with Open SUSE. We are facing some technical problems >>> with the runtime dependencies and the way the services start. Can >>> you please describe the steps you are following and what kind of >>> specific problems do you have? >>> >>> Thanks >>> >>> Victor Rodriguez >>> >>> >>> On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote: >>>> >>>> Hi all, >>>> >>>> Are there anyone to try install StarlingX in a ready centos system or others systems? It looks like to deploy devstack. 
>>>> 
>>>> Thanks
>>>> Kunpeng
>>>> _______________________________________________
>>>> Starlingx-discuss mailing list
>>>> Starlingx-discuss at lists.starlingx.io
>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus
>>>> s
>>> 
>>> _______________________________________________
>>> Starlingx-discuss mailing list
>>> Starlingx-discuss at lists.starlingx.io
>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>> 
>> 
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> 
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Ian.Jolliffe at windriver.com Wed Jul 31 13:52:42 2019
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Wed, 31 Jul 2019 13:52:42 +0000
Subject: [Starlingx-discuss] [TSC] Minutes from 7/25 meeting
Message-ID: <265D9713-3A04-4A20-B4C9-56AF35AC4E3E@windriver.com>

7/25 - Meeting
=============

TSC feature backlog (Bill)
- recommend Google Sheets for now, with an option to revisit Launchpad Blueprints (or other) in future
- (ildikov) Can the specs repo folder and Storyboard tag combination fulfill what we need in terms of a backlog?
- Google as a temporary solution - need to look for an alternative that is accessible globally

Shanghai room requests - onboarding and PTG
- 1 day max - expecting fewer people from NA to attend
- We should do an onboarding session - a 40 min slot should be sufficient

Project confirmation
- Ildiko will send a message to the ML inviting input from the community
- Discuss next meeting

Election round 2
- Ian will put together a proposal for timing for next week
- Hold elections in Sep/Oct to get closer to the original envisioned timelines of near the end of the first and third quarters
- Possibility of running the TSC and the PL/TL elections concurrently

7/18 - Meeting
============

Election learnings and dates for second round of elections (Ildiko)
https://etherpad.openstack.org/p/StarlingX_Election_Process
Action: TSC to close on a timeline for the elections by the end of the month.
Need to consider whether we want them before or after the summit, and consider the release 3 date as well.

Project confirmation - needs to be discussed as we head into the second half of the year (Ildiko)
https://wiki.openstack.org/w/index.php?title=Governance/Foundation/OSFProjectConfirmationGuidelines
https://etherpad.openstack.org/p/stx-confirmation-readiness-assessment-q2-2019
Action: TSC to do a self-evaluation against the criteria

Shanghai PTG room request (Ildiko)
Onboarding sessions are now part of the PTG.
Need time and an estimate of the number of participants.
https://openstackfoundation.formstack.com/forms/shanghai_ptg_survey
Action: Need to know if the project wants to run onboarding and the time required at the PTG. Request input for next PTG meeting.

Python 2 -> 3 technical churn - Saul
Decision: out for R3.
Align with CentOS 8, which tentatively is R4; CentOS 8 would not have Python 2.

Additional R3 Feature Candidates - Brent (unreviewed items from the 7/11 meeting)
- Layered Build
  Decision: In, spec in progress
- Support remote authentication via OIDC / OAUTH2 for k8s API
  Decision: In

From James.Gauld at windriver.com Wed Jul 31 13:54:59 2019
From: James.Gauld at windriver.com (Gauld, James)
Date: Wed, 31 Jul 2019 13:54:59 +0000
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019
In-Reply-To: 
References: 
Message-ID: <8E5740EC88EF3E4BA3196F2545DC8625C12A73C7@ALA-MBD.corp.ad.wrs.com>

Is anyone having problems with openstack commands on recent build?

I installed a build from late last night, I did "system application-apply stx-openstack" and that was successful, and everything seems running. I let it sit for several hours. I created the clouds.yaml as per usual old Wiki. Then cannot continue with the "Verify the cluster endpoints" step. I cannot do "openstack endpoint list".

At first I was getting:

controller-0:~$ openstack endpoint list
The request you have made requires authentication. (HTTP 401) (Request-ID: req-2e791341-d5e0-4064-9962-eeae6115121d)

Now I am getting :

controller-0:~# export OS_CLOUD=openstack_helm
controller-0:~# openstack endpoint list
The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. (HTTP 401) (Request-ID: req-c3e92a3d-b9fa-4cce-92bb-a84d79ed79b0)

And logging out with fresh environment:

controller-0:~$ export OS_CLOUD=openstack_helm
controller-0:~$ openstack endpoint list
The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. (HTTP 401) (Request-ID: req-ff102b9d-9afa-4fbc-b5c0-ec2fd46846bc)

The horizon openstack GUI won't allow me to login, message is "invalid credentials".

-jim

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: July-22-19 10:52 AM
To: Anirudh Gupta; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: Re: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019

In order to see the endpoints for nova, neutron, etc.. you need /etc/openstack/clouds.yaml file to be setup.
The steps you need are referenced in this (deprecated) document.
https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints
It will also show the appropriate commands for the provider/tenant networking setup, until the official doc is synced with that wiki.

Al
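(For readers while the official doc catches up: the wiki steps above come down to creating /etc/openstack/clouds.yaml. A minimal sketch of its shape, with placeholder credentials — the field list follows the openstack_helm example on that wiki as best recalled, so treat it as an assumption:)

    clouds:
      openstack_helm:
        region_name: RegionOne
        identity_api_version: 3
        auth:
          username: 'admin'
          password: '<ADMIN_PASSWORD>'
          project_name: 'admin'
          project_domain_name: 'default'
          user_domain_name: 'default'
          auth_url: 'http://keystone.openstack.svc.cluster.local/v3'

(As the later replies in this thread point out, OS_CLOUD only selects this entry cleanly in a shell where /etc/platform/openrc has not been sourced, since leftover OS_* variables appear to take precedence. A sketch of the two environments:)

    # platform keystone, for platform commands such as 'system ...':
    source /etc/platform/openrc
    openstack endpoint list

    # containerized keystone, in a fresh shell, using clouds.yaml:
    export OS_CLOUD=openstack_helm
    openstack endpoint list

(For the provider/tenant networking setup, the neutron providernet-* commands were replaced in stx 2.0 by data networks and network segment ranges; roughly as follows, with the names, VLAN range and project ID as illustrative placeholders:)

    system datanetwork-add ${PHYSNET0} vlan
    openstack network segment range create ${PHYSNET0}-r1 --network-type vlan --physical-network ${PHYSNET0} --minimum 400 --maximum 499 --private --project ${ADMINID}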
From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Friday, July 19, 2019 12:13 AM
To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019

Hi Team,

I am following the below document to set up AIO-Simplex R2.0 with the green build dated 17-July.
https://docs.starlingx.io/deploy_install_guides/upcoming/aio_simplex.html

I have successfully verified the endpoints, using the command
openstack endpoint list

Issue 1 :-
The endpoint list contains endpoint of the services fm,patching,vim,smapi,keystone,barbacian and sysinv.
The other basic openstack services are visible if I run
kubectl get services -n openstack
But, there are no endpoints of nova,neutron,glance and all other openstack services?

Issue 2 :-
I am unable to proceed further with the set up Provider/tenant networking setup

[root at controller-0 sysadmin(keystone_admin)]# neutron providernet-create ${PHYSNET0} --type vlan
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Unknown command [u'providernet-create', u'--type', u'vlan']

What could be the solution to proceed further?

Regards
Anirudh Gupta

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cindy.xie at intel.com Wed Jul 31 14:07:01 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 31 Jul 2019 14:07:01 +0000
Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F360033F2@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for 7/31 meeting:

1. Bug triage & review, re-prioritize LPs (Cindy/Saul/Brent)
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
Suggestion from Tingjie: collect log files from the test system. Standard process required: test report template or must-have logs?
AR: Tingjie to send Bill a list of "incomplete" LPs with info missing. 4 LPs are currently under the storage domain.
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
Team reviewed LPs in both domains; priority has been updated live in Launchpad.

2. Sanity test status for kernel minor upgrade (Shuai)
Deployment testing in Dalian ODC: passed on both RT and STD kernels.
Sanity auto testing in Shanghai lab: AIO-SX VE sanity auto-test failed, but running it manually passes; debugging the scripts. AIO-DX sanity auto-testing failed; suspect a test-script issue. Multi-node auto sanity WIP.

3. Ceph containerization plan review (Tingjie)
- story: https://storyboard.openstack.org/#!/story/2005527
Tingjie presented the design and his plan for 4 milestones. Two more opens:
- if we do not cut over to containerized Ceph in 3.0, how to get some of the feature in without breaking the current Ceph functionality;
- Brent has one review comment in Tingjie's spec regarding SW upgrade (backward compatibility for containerized Ceph and Mimic).
AR: Tingjie will continue working on the spec and design to address the concerns. The plan will be refined further.

4. Opens (all)
- None

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; Wold, Saul; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; 'zhaos'
Cc: 'Seiler, Glenn'; Hu, Wei W; Peng Tan; Gomez, Juan P; 'Waines, Greg'; 'Eslimi, Dariush'; Jones, Bruce E; 'Zhi Zhi2 Chang'; Chen, Tingjie; 'Badea, Daniel'; 'Chen, Jacky'; 'Komiyama, Takeo'; Armstrong, Robert H; 'Carlos Cebrian'; Cobbley, David A
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, July 31, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location):
  US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From andy.ning at windriver.com Wed Jul 31 14:36:24 2019
From: andy.ning at windriver.com (Andy Ning)
Date: Wed, 31 Jul 2019 10:36:24 -0400
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019
In-Reply-To: <8E5740EC88EF3E4BA3196F2545DC8625C12A73C7@ALA-MBD.corp.ad.wrs.com>
References: <8E5740EC88EF3E4BA3196F2545DC8625C12A73C7@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

FYI, I usually don't use OS_CLOUD, but just source /etc/platform/openrc, and change OS_AUTH_URL to point to the containerized keystone:

OS_AUTH_URL=http://keystone.openstack.svc.cluster.local:80/v3

This works for me for an older load. But when I tried this in a lab I installed yesterday, it doesn't work. There might be a real issue here.

Andy

On 2019-07-31 09:54 AM, Gauld, James wrote:
>
> Is anyone having problems with openstack commands on recent build?
>
> I installed a build from late last night, I did "system
> application-apply stx-openstack" and that was successful, and
> everything seems running. I let it sit for several hours. I created
> the clouds.yaml as per usual old Wiki. Then cannot continue with the
> "Verify the cluster endpoints" step. I cannot do "openstack endpoint
> list".
>
> At first I was getting:
>
> controller-0:~$ openstack endpoint list
>
> The request you have made requires authentication. (HTTP 401)
> (Request-ID: req-2e791341-d5e0-4064-9962-eeae6115121d)
>
> Now I am getting :
>
> controller-0:~# export OS_CLOUD=openstack_helm
>
> controller-0:~# openstack endpoint list
>
> The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd.
> (HTTP 401) (Request-ID: req-c3e92a3d-b9fa-4cce-92bb-a84d79ed79b0)
>
> And logging out with fresh environment:
>
> controller-0:~$ export OS_CLOUD=openstack_helm
>
> controller-0:~$ openstack endpoint list
>
> The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd.
> (HTTP 401) (Request-ID: req-ff102b9d-9afa-4fbc-b5c0-ec2fd46846bc)
>
> The horizon openstack GUI won't allow me to login, message is "invalid
> credentials".
>
> -jim
>
> *From:*Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
> *Sent:* July-22-19 10:52 AM
> *To:* Anirudh Gupta; starlingx-discuss at lists.starlingx.io;
> starlingx-announce at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] StarlingX R2.0 Issues against Green
> Build dated 17Th July 2019
>
> In order to see the endpoints for nova, neutron, etc.. you need
> /etc/openstack/clouds.yaml file to be setup.
>
> The steps you need are referenced in this (deprecated) document.
> > Al > > *From:*Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] > *Sent:* Friday, July 19, 2019 12:13 AM > *To:* starlingx-discuss at lists.starlingx.io; > starlingx-announce at lists.starlingx.io > *Subject:* [Starlingx-discuss] StarlingX R2.0 Issues against Green > Build dated 17Th July 2019 > > Hi Team, > > I am following the below document to set up AIO-Simplex R2.0 with the > green build dated 17-July. > > https://docs.starlingx.io/deploy_install_guides/upcoming/aio_simplex.html > > I have successfully verified the endpoints, using the command > > /openstack endpoint list/ > > *Issue 1 :- * > > The endpoint list contains endpoint of the services > fm,patching,vim,smapi,keystone,barbacian and sysinv. > > The other basic openstack services are visible if I run > > /kubectl get services -n openstack/ > > But, there are no endpoints of nova,neutron,glance and all other > openstack services? > > *Issue 2 :-* > > I am unable to proceed further with the set up Provider/tenant > networking setup > ¶ > > > [root at controller-0 sysadmin(keystone_admin)]# neutron > providernet-create ${PHYSNET0} --type vlan > > neutron CLI is deprecated and will be removed in the future. Use > openstack CLI instead. > > Unknown command [u'providernet-create', u'--type', u'vlan'] > > What could be the solution to proceed further? > > Regards > > Anirudh Gupta > > DISCLAIMER: This electronic message and all of its contents, contains > information which is privileged, confidential or otherwise protected > from disclosure. The information contained in this electronic mail > transmission is intended for use only by the individual or entity to > which it is addressed. If you are not the intended recipient or may > have received this electronic mail transmission in error, please > notify the sender immediately and delete / destroy all copies of this > electronic mail transmission without disclosing, copying, > distributing, forwarding, printing or retaining any part of it. Hughes > Systique accepts no responsibility for loss or damage arising from the > use of the information transmitted by this email including damage from > virus. > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr -------------- next part -------------- An HTML attachment was scrubbed... URL: From James.Gauld at windriver.com Wed Jul 31 14:40:55 2019 From: James.Gauld at windriver.com (Gauld, James) Date: Wed, 31 Jul 2019 14:40:55 +0000 Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019 References: Message-ID: <8E5740EC88EF3E4BA3196F2545DC8625C12A7408@ALA-MBD.corp.ad.wrs.com> Addendum, If I create a modified version of /etc/platform/openrc, and change the following line OS_AUTH_URL, source that file, I can successfully run openstack commands from that shell: export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3 Something not working for me with the /etc/openstack/clouds.yaml, and horizon openstack GUI. -jim From: Gauld, James Sent: July-31-19 9:55 AM To: Bailey, Henry Albert (Al); Anirudh Gupta; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019 Is anyone having problems with openstack commands on recent build? 
I installed a build from late last night, I did "system application-apply stx-openstack" and that was successful, and everything seems running. I let it sit for several hours. I created the clouds.yaml as per usual old Wiki. Then cannot continue with the "Verify the cluster endpoints" step. I cannot do "openstack endpoint list". At first I was getting: controller-0:~$ openstack endpoint list The request you have made requires authentication. (HTTP 401) (Request-ID: req-2e791341-d5e0-4064-9962-eeae6115121d) Now I am getting : controller-0:~# export OS_CLOUD=openstack_helm controller-0:~# openstack endpoint list The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. (HTTP 401) (Request-ID: req-c3e92a3d-b9fa-4cce-92bb-a84d79ed79b0) And logging out with fresh environment: controller-0:~$ export OS_CLOUD=openstack_helm controller-0:~$ openstack endpoint list The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. (HTTP 401) (Request-ID: req-ff102b9d-9afa-4fbc-b5c0-ec2fd46846bc) The horizon openstack GUI won't allow me to login, message is "invalid credentials". -jim From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: July-22-19 10:52 AM To: Anirudh Gupta; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019 In order to see the endpoints for nova, neutron, etc.. you need /etc/openstack/clouds.yaml file to be setup. The steps you need are referenced in this (deprecated) document. https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints It will also show the appropriate commands for the provider/tenant networking setup, until the official doc is synced with that wiki. Al From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Friday, July 19, 2019 12:13 AM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019 Hi Team, I am following the below document to set up AIO-Simplex R2.0 with the green build dated 17-July. https://docs.starlingx.io/deploy_install_guides/upcoming/aio_simplex.html I have successfully verified the endpoints, using the command openstack endpoint list Issue 1 :- The endpoint list contains endpoint of the services fm,patching,vim,smapi,keystone,barbacian and sysinv. The other basic openstack services are visible if I run kubectl get services -n openstack But, there are no endpoints of nova,neutron,glance and all other openstack services? Issue 2 :- I am unable to proceed further with the set up Provider/tenant networking setup [root at controller-0 sysadmin(keystone_admin)]# neutron providernet-create ${PHYSNET0} --type vlan neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. Unknown command [u'providernet-create', u'--type', u'vlan'] What could be the solution to proceed further? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy.ning at windriver.com Wed Jul 31 14:45:35 2019 From: andy.ning at windriver.com (Andy Ning) Date: Wed, 31 Jul 2019 10:45:35 -0400 Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019 In-Reply-To: <8E5740EC88EF3E4BA3196F2545DC8625C12A7408@ALA-MBD.corp.ad.wrs.com> References: <8E5740EC88EF3E4BA3196F2545DC8625C12A7408@ALA-MBD.corp.ad.wrs.com> Message-ID: On 2019-07-31 10:40 AM, Gauld, James wrote: > > Addendum, > > If I create a modified version of /etc/platform/openrc, and change the > following line OS_AUTH_URL, source that file, I can successfully run > openstack commands from that shell: > > export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3 > > Something not working for me with the /etc/openstack/clouds.yaml, and > horizon openstack GUI. > I checked a lab with load BUILD_ID="20190729T233000Z", and I don't see this file exist. Are you sure it exist in your lab? Andy > -jim > > *From:*Gauld, James > *Sent:* July-31-19 9:55 AM > *To:* Bailey, Henry Albert (Al); Anirudh Gupta; > starlingx-discuss at lists.starlingx.io; > starlingx-announce at lists.starlingx.io > *Subject:* RE: [Starlingx-discuss] StarlingX R2.0 Issues against Green > Build dated 17Th July 2019 > > Is anyone having problems with openstack commands on recent build? > > I installed a build from late last night, I did "system > application-apply stx-openstack" and that was successful, and > everything seems running. I let it sit for several hours. I created > the clouds.yaml as per usual old Wiki. Then cannot continue with the > "Verify the cluster endpoints" step. I cannot do "openstack endpoint > list". > > At first I was getting: > > controller-0:~$ openstack endpoint list > > The request you have made requires authentication. (HTTP 401) > (Request-ID: req-2e791341-d5e0-4064-9962-eeae6115121d) > > Now I am getting : > > controller-0:~# export OS_CLOUD=openstack_helm > > controller-0:~# openstack endpoint list > > The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. > (HTTP 401) (Request-ID: req-c3e92a3d-b9fa-4cce-92bb-a84d79ed79b0) > > And logging out with fresh environment: > > controller-0:~$ export OS_CLOUD=openstack_helm > > controller-0:~$ openstack endpoint list > > The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. > (HTTP 401) (Request-ID: req-ff102b9d-9afa-4fbc-b5c0-ec2fd46846bc) > > The horizon openstack GUI won't allow me to login, message is "invalid > credentials". > > -jim > > *From:*Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] > *Sent:* July-22-19 10:52 AM > *To:* Anirudh Gupta; starlingx-discuss at lists.starlingx.io; > starlingx-announce at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] StarlingX R2.0 Issues against Green > Build dated 17Th July 2019 > > In order to see the endpoints for nova, neutron, etc.. you need > /etc/openstack/clouds.yaml file to be setup. > > The steps you need are referenced in this (deprecated) document. 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andy.ning at windriver.com  Wed Jul 31 14:49:10 2019
From: andy.ning at windriver.com (Andy Ning)
Date: Wed, 31 Jul 2019 10:49:10 -0400
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019
In-Reply-To: 
References: <8E5740EC88EF3E4BA3196F2545DC8625C12A7408@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

On 2019-07-31 10:45 AM, Andy Ning wrote:
> I checked a lab with load BUILD_ID="20190729T233000Z", and I don't see
> this file there. Are you sure it exists in your lab?

Sorry, I realized that you have to create such a file ...

Andy

-- 
Andy Ning
Cube: 3071
Tel: 613-9631408 (int: 4408)
Skype: andy.ning.wr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andy.ning at windriver.com  Wed Jul 31 15:06:45 2019
From: andy.ning at windriver.com (Andy Ning)
Date: Wed, 31 Jul 2019 11:06:45 -0400
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019
Message-ID: <4c2994ae-df32-5ccb-114d-2d8b7f4a2179@windriver.com>

So if you source /etc/platform/openrc already, and then export OS_CLOUD=openstack_helm, openstack will try to talk to the keystone pointed to by OS_AUTH_URL and the command will fail.
> > >> Andy >> >>> -jim >>> >>> *From:*Gauld, James >>> *Sent:* July-31-19 9:55 AM >>> *To:* Bailey, Henry Albert (Al); Anirudh Gupta; >>> starlingx-discuss at lists.starlingx.io; >>> starlingx-announce at lists.starlingx.io >>> *Subject:* RE: [Starlingx-discuss] StarlingX R2.0 Issues against >>> Green Build dated 17Th July 2019 >>> >>> Is anyone having problems with openstack commands on recent build? >>> >>> I installed a build from late last night, I did "system >>> application-apply stx-openstack" and that was successful, and >>> everything seems running. I let it sit for several hours. I >>> created the clouds.yaml as per usual old Wiki. Then cannot >>> continue with the "Verify the cluster endpoints" step. I cannot do >>> "openstack endpoint list". >>> >>> At first I was getting: >>> >>> controller-0:~$ openstack endpoint list >>> >>> The request you have made requires authentication. (HTTP 401) >>> (Request-ID: req-2e791341-d5e0-4064-9962-eeae6115121d) >>> >>> Now I am getting : >>> >>> controller-0:~# export OS_CLOUD=openstack_helm >>> >>> controller-0:~# openstack endpoint list >>> >>> The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. >>> (HTTP 401) (Request-ID: req-c3e92a3d-b9fa-4cce-92bb-a84d79ed79b0) >>> >>> And logging out with fresh environment: >>> >>> controller-0:~$ export OS_CLOUD=openstack_helm >>> >>> controller-0:~$ openstack endpoint list >>> >>> The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. >>> (HTTP 401) (Request-ID: req-ff102b9d-9afa-4fbc-b5c0-ec2fd46846bc) >>> >>> The horizon openstack GUI won't allow me to login, message is >>> "invalid credentials". >>> >>> -jim >>> >>> *From:*Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] >>> *Sent:* July-22-19 10:52 AM >>> *To:* Anirudh Gupta; starlingx-discuss at lists.starlingx.io; >>> starlingx-announce at lists.starlingx.io >>> *Subject:* Re: [Starlingx-discuss] StarlingX R2.0 Issues against >>> Green Build dated 17Th July 2019 >>> >>> In order to see the endpoints for nova, neutron, etc.. you need >>> /etc/openstack/clouds.yaml file to be setup. >>> >>> The steps you need are referenced in this (deprecated) document. >>> >>> https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints >>> >>> It will also show the appropriate commands for the provider/tenant >>> networking setup, until the official doc is synced with that wiki. >>> >>> Al >>> >>> *From:*Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] >>> *Sent:* Friday, July 19, 2019 12:13 AM >>> *To:* starlingx-discuss at lists.starlingx.io; >>> starlingx-announce at lists.starlingx.io >>> *Subject:* [Starlingx-discuss] StarlingX R2.0 Issues against Green >>> Build dated 17Th July 2019 >>> >>> Hi Team, >>> >>> I am following the below document to set up AIO-Simplex R2.0 with >>> the green build dated 17-July. >>> >>> https://docs.starlingx.io/deploy_install_guides/upcoming/aio_simplex.html >>> >>> I have successfully verified the endpoints, using the command >>> >>> /openstack endpoint list/ >>> >>> *Issue 1 :- * >>> >>> The endpoint list contains endpoint of the services >>> fm,patching,vim,smapi,keystone,barbacian and sysinv. >>> >>> The other basic openstack services are visible if I run >>> >>> /kubectl get services -n openstack/ >>> >>> But, there are no endpoints of nova,neutron,glance and all other >>> openstack services? 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Stefan.Dinescu at windriver.com  Wed Jul 31 15:18:09 2019
From: Stefan.Dinescu at windriver.com (Dinescu, Stefan)
Date: Wed, 31 Jul 2019 15:18:09 +0000
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019
In-Reply-To: <4c2994ae-df32-5ccb-114d-2d8b7f4a2179@windriver.com>
References: <8E5740EC88EF3E4BA3196F2545DC8625C12A7408@ALA-MBD.corp.ad.wrs.com>, <4c2994ae-df32-5ccb-114d-2d8b7f4a2179@windriver.com>
Message-ID: 

Yeah, if you want to use any openstack commands for the containerized/application side, you must NOT source the /etc/platform/openrc file at all from that shell, as the exported variables will interfere with the settings saved in the clouds.yaml file.

________________________________
From: Andy Ning [andy.ning at windriver.com]
Sent: Wednesday, July 31, 2019 6:06 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019

So if you source /etc/platform/openrc already, and then export OS_CLOUD=openstack_helm, openstack will try to talk to the keystone pointed to by OS_AUTH_URL and the command will fail.
Using OS_CLOUD=openstack_helm before sourcing /etc/platform/openrc, openstack commands work for me.

Andy
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Al.Bailey at windriver.com  Wed Jul 31 15:33:44 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Wed, 31 Jul 2019 15:33:44 +0000
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019
In-Reply-To: 
References: <8E5740EC88EF3E4BA3196F2545DC8625C12A7408@ALA-MBD.corp.ad.wrs.com>, <4c2994ae-df32-5ccb-114d-2d8b7f4a2179@windriver.com>
Message-ID: 

It says that the admin account in his containerized keystone is "locked" now.

The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd. (HTTP 401) (Request-ID: req-ff102b9d-9afa-4fbc-b5c0-ec2fd46846bc)

What are the steps for Jim to recover?

Al

From: Dinescu, Stefan [mailto:Stefan.Dinescu at windriver.com]
Sent: Wednesday, July 31, 2019 11:18 AM
To: Ning, Antai (Andy); starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019

Yeah, if you want to use any openstack commands for the containerized/application side, you must NOT source the /etc/platform/openrc file at all from that shell, as the exported variables will interfere with the settings saved in the clouds.yaml file.

From andy.ning at windriver.com  Wed Jul 31 15:41:11 2019
From: andy.ning at windriver.com (Andy Ning)
Date: Wed, 31 Jul 2019 11:41:11 -0400
Subject: [Starlingx-discuss] StarlingX R2.0 Issues against Green Build dated 17Th July 2019
Message-ID: 

Containerized keystone has the following settings:

  lockout_duration = 1800
  lockout_failure_attempts = 5

So after 1800 seconds the locked user should be unlocked.

Andy

On 2019-07-31 11:33 AM, Bailey, Henry Albert (Al) wrote:
> It says that the admin account in his containerized keystone is
> "locked" now.
> The account is locked for user: 76aca3c3ce404f28a042b366cc2434cd.
> (HTTP 401) (Request-ID: req-ff102b9d-9afa-4fbc-b5c0-ec2fd46846bc)
>
> What are the steps for Jim to recover?

-- 
Andy Ning
Cube: 3071
Tel: 613-9631408 (int: 4408)
Skype: andy.ning.wr
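For readers wondering where those values live: lockout_duration and lockout_failure_attempts are standard keystone [security_compliance] options, so the behavior described above is plain keystone configuration rather than anything StarlingX-specific. A sketch of the relevant section (values copied from the message above; where the file sits inside the containerized keystone is not shown here):

    # keystone.conf -- standard keystone security-compliance options
    [security_compliance]
    lockout_failure_attempts = 5    # failed auth attempts before the account locks
    lockout_duration = 1800         # seconds the account stays locked (30 minutes)

In other words, the quickest recovery is simply to stop retrying with the stale credentials and wait out the 30-minute lockout window.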
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vm.rod25 at gmail.com  Wed Jul 31 16:03:04 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Wed, 31 Jul 2019 09:03:04 -0700
Subject: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system?
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F36002F99@SHSMSX104.ccr.corp.intel.com>
References: <55287B28-8AA4-44B3-9DB7-430B5D27B9B2@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F36001A30@SHSMSX104.ccr.corp.intel.com> <72AD03D27224C74982BE13246D75B39739A3F69C@SHSMSX103.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608BCB11@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608BCC84@SHSMSX105.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F36002F99@SHSMSX104.ccr.corp.intel.com>
Message-ID: 

Hi everyone

Sorry for the delay. Kunpeng, I will try to show you what we did for multi-OS a while ago for our Debian-based porting experiment, and what we discussed in the Monday morning multi-OS meeting (you are more than welcome to join this meeting, where we can discuss all these kinds of topics).

From the journey we have had trying to port StarlingX to a different OS than the one provided by CENGN, this is my feedback:

1) If you grab the RPMs from CENGN and try to install them on a vanilla CentOS, you might face some runtime dependency problems due to the different versions of the libraries that StarlingX is using. It is necessary to fix all these installation problems. However, it is worth trying first to install all the flock services.

2) Hopefully there will be no need to rebuild, since we are using CentOS 7.6 as a base. However, as I mentioned before, StarlingX uses some specific versions of the libraries (the reason for using those specific versions is not clear to me yet, but the build system is configured that way), and it might be necessary to rebuild them. I was working on fixing missing build requirements a few months ago, in order to have all the build requirements complete and be able to build using plain mock.

4) When running the services, you will realize they don't use systemd in the standard way; however, running them that way is possible as long as all the runtime requirements are fulfilled.

5) As mentioned before, the kernel has many patches that are applied on top of the SRPM. After an analysis that Mario Carrillo, Saul, and Abraham did months ago, it was possible to reduce the patches to only 5 when we ported to the Debian LTS kernel (4.15). These ones are documented in the https://github.com/starlingx-staging/stx-packaging repository:
These ones are documented on the https://github.com/starlingx-staging/stx-packaging repository: 0001-StarlingX-Death-of-Arbitrary-Process-Notification.patch 0002-StarlingX-Kernel-Threads-Compute-CPU-Affinity.patch 0003-StarlingX-Kernel-Threads-Workqueues-IRQs.patch 0004-StarlingX-Kernel-Threads-iSCSI.patch ( the other patches are for performance or are already applied on the LTS 4.15 kernel ) After having the kernel and flock services compiled and installed on the vanilla CentOS, is necessary to install the starling X version of other components of the OS such as bash and systemd, packages that have patches for starling X After that, we would be able to start to test that each flock service works as expected. We are working on the development of white-box tests for the flock services, but in the meantime, we are learning of the intrinsics of the code to understand what to test. It has been a long journey to arrive here, but we hope that our findings could help you. In the multi os team, we keep working to make this possible, we are short of hands but if you are interested to join we would be more than welcome. Regards Victor Rodriguez On Wed, Jul 31, 2019 at 5:32 AM Xie, Cindy wrote: > > Kernel-rt: real time Linux kernel; > Kernel-std: standard kernel. > > -----Original Message----- > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Wednesday, July 31, 2019 5:19 PM > To: Lin, Shuicheng > Cc: Chen, Yan ; Xie, Cindy ; Victor Rodriguez ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > Emmmmmmm, I think it’s hard for me to study these so many kernel patches. > By the way, I have a naive question that what are the meanings of "kernel-rt" and "kernel-std”. > > Thanks > Kunpeng > > > On Jul 31, 2019, at 14:32, Lin, Shuicheng wrote: > > > > Hi Kunpeng, > > You could find kernel patches in below link. Most of them are performance/bug related. > > https://opendev.org/starlingx/integ/src/branch/master/kernel/kernel-st > > d/centos/patches Sorry, but I don’t have data which flock services > > will be affected by it. > > > > > > Best Regards > > Shuicheng > > > > > > -----Original Message----- > > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > > Sent: Wednesday, July 31, 2019 11:44 AM > > To: Lin, Shuicheng ; Chen, Yan > > ; Xie, Cindy ; Victor > > Rodriguez > > Cc: starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > > > > What are the differences between StarlingX kernel and standard CentOS kernel? On the other hand, which flock services are affected? > > I mean, is the StarlingX complete if those based services(eg,docker/libvirt/ceph) and flock services(eg, fm) are installed and launched, and the risk is that the flock services cannot be launched well in standard kernel? > > > > Thanks > > Kunpeng > > > >> On Jul 31, 2019, at 10:21, Lin, Shuicheng wrote: > >> > >> My concern is that, the CentOS kernel used in StarlingX is customized, it is not the same as the CentOS. > >> So some feature/component may not work with a standard CentOS system. 
> >> > >> Best Regards > >> Shuicheng > >> > >> -----Original Message----- > >> From: Chen, Yan [mailto:yan.chen at intel.com] > >> Sent: Wednesday, July 31, 2019 9:39 AM > >> To: Xie, Cindy ; 张鲲鹏 > >> ; Victor Rodriguez > >> Cc: starlingx-discuss at lists.starlingx.io > >> Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > >> > >> No, I think a ready CentOS means a standard CentOS system, which is not installed from StarlingX ISO. > >> We didn’t try to install StarlingX on such a system. > >> > >> > >> Yan > >> > >> -----Original Message----- > >> From: Xie, Cindy > >> Sent: Tuesday, July 30, 2019 20:52 > >> To: 张鲲鹏 ; Victor Rodriguez > >> ; Chen, Yan > >> Cc: starlingx-discuss at lists.starlingx.io > >> Subject: RE: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > >> > >> I think it's doable. @Yan, did you ever try to run the provision scripts on a ready CentOS bare-metal? > >> > >> Thx. - cindy > >> > >> -----Original Message----- > >> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > >> Sent: Tuesday, July 30, 2019 6:10 PM > >> To: Victor Rodriguez > >> Cc: starlingx-discuss at lists.starlingx.io > >> Subject: Re: [Starlingx-discuss] [starlingx-discuss][devstack/deploy]How to deploy starlingx in a ready centos system? > >> > >> Hi Victor, > >> > >> I haven’t started yet, because I don’t know how to do it. I'm glad you are performing this experiments, could you share your experience or documents? You know, booting from iso isn’t a well deployment in some cases. > >> Recently, I studied the kickstart configs in bootimage.iso, and my idea is to operate the ready system according to the ks.cfg, but I’m not sure if it’s a right way. > >> > >> Thanks > >> Kunpeng > >> > >>> On Jul 29, 2019, at 20:13, Victor Rodriguez wrote: > >>> > >>> Hi Kunpeng > >>> > >>> In the MultiOS subproject, we are performing such experiments, at > >>> the moment with Open SUSE. We are facing some technical problems > >>> with the runtime dependencies and the way the services start. Can > >>> you please describe the steps you are following and what kind of > >>> specific problems do you have? > >>> > >>> Thanks > >>> > >>> Victor Rodriguez > >>> > >>> > >>> On Mon, Jul 29, 2019 at 1:48 AM 张鲲鹏 wrote: > >>>> > >>>> Hi all, > >>>> > >>>> Are there anyone to try install StarlingX in a ready centos system or others systems? It looks like to deploy devstack. 
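Since the four patches above are ordinary kernel patches, a minimal sketch of trying them against an unpacked kernel tree might look like this (all paths are illustrative; the actual patch files live in the stx-packaging repository linked above):

    # illustrative only -- adjust paths to wherever the tree and patches were unpacked
    cd linux-4.15
    for p in ../patches/000?-StarlingX-*.patch; do
        patch -p1 --dry-run < "$p" && patch -p1 < "$p"   # test each patch, then apply it
    done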
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From ildiko.vancsa at gmail.com  Wed Jul 31 18:41:18 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 31 Jul 2019 20:41:18 +0200
Subject: [Starlingx-discuss] Release 2.0 feature list and updated overview slides
Message-ID: 

Hi StarlingX Community,

I'm reaching out to collect content for communications about the upcoming 2.0 release.

If you added a feature to the project that will be released at the end of the month and that you would like to highlight, please add it to the release 2.0 content section of this etherpad: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans

If documentation already exists, please link it in the etherpad as well so we can point to further details.

Similarly, if you're interested in writing blog posts on any of the new items in release 2.0, please upload the content in a new pull request on GitHub, or reach out to me with the content and I can help you get it uploaded to the website: https://github.com/StarlingXWeb/starlingx-website/tree/master/site/blog

There's also an updated version of the StarlingX overview slide deck uploaded for review. Please check it out and reply to this thread or on the GitHub issue if you have feedback on the new content: https://github.com/StarlingXWeb/starlingx-website/issues/39

Please let me know if you have any questions.

Thanks,
Ildikó

From Bill.Zvonar at windriver.com  Wed Jul 31 19:25:39 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 31 Jul 2019 19:25:39 +0000
Subject: [Starlingx-discuss] Community Call (July 31, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ABE9EE@ALA-MBD.corp.ad.wrs.com>

From today's meeting...
- sanity
  - any red sanities since last Community meeting? green so far this week
  - per Ghada, the red sanity last week was due to the allocation of huge pages causing OVS-DPDK failures - it's intermittent but recently increasing
    - the sanity failure was caused by https://bugs.launchpad.net/starlingx/+bug/1838031 and https://bugs.launchpad.net/starlingx/+bug/1837936 -- both are duplicates of https://bugs.launchpad.net/bugs/1829403
    - a fix was submitted as of July 29
- reviews in need of attention
  - nothing this week
- defect trend / gating launchpads: https://docs.google.com/spreadsheets/d/1DZZgqrCIL6wxv51_yFBk6Lfmtf1AqPD6z7e5hEs3prU/edit#gid=1694187926
  - will continue to focus on getting plans in place for the High importance bugs
- RC1 next week (Ghada)
  - logistics/mechanics to be discussed by the release meeting team >> Scott, Dean, Don, please attend
  - the community should expect that merges will be halted on master for a short period while the branch is being created
  - test deliverables for the milestone are still open; need to discuss an exception or moving the branch creation date
    - Feature Test: 33 TCs not run yet, 9 blocked, 12 deferred
    - Regression Test: 39 TCs not run, 30 blocked, 6 deferred
    - will discuss in the Release Team meeting tomorrow
  - Elio mentioned that they have some L2 issues - Ghada advised him to get Forrest involved (Networking PL) and/or send questions to the mailing list
- Doc update (Mike T)
  - review of gating doc stories -- https://storyboard.openstack.org/#!/story/list?tags=stx.docs&tags=stx.2.0
    - most already closed; currently 7 remaining open, 4 of them well in progress with pending reviews
  - wiki migration status -- see https://docs.google.com/spreadsheets/d/1UJjUttsWQRyauATrip0wKGIxSO7DvyetDKmPwvEIDaA
  - bug status -- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs
- Bitergia update: https://etherpad.openstack.org/p/stx-bitergia
  - asked folks to check the "individual contributor" dashboard: https://starlingx.biterg.io/goto/f8ba64af7d170d0b3d20b820bd9aac77
- PL/TL elections coming up in September
  - no comments
- IPv6 questions from Elio
  - he'll put his questions up on the mailing list
- Release 2.0 communication preparations (ildikov)
  - blog post topics and authors
  - features to highlight - https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans
  - OpenStack fork status
  - new overview slide deck: https://github.com/StarlingXWeb/starlingx-website/files/3392060/StarlingX.Onboarding.Deck.for.Web.July.2019.pdf
- updates on open actions
  - ACTION: Yong to propose how we could formalize the process of assessing the impact of a bug from different perspectives
    - Yong to send his thoughts to the mailing list; some practices: ask questions to evaluate 1) the importance (how much it impacts users), 2) the severity (how badly the system gets hurt), and 3) what kind of users (admin or normal end users) would be impacted:
      - Does the issue make a major feature (mostly visible to users) non-functional or missing? For example, VMs cannot be created, or pods fail to run.
      - Does the issue impact the user experience? For example, information in the Horizon dashboard is incorrect or out of date, but users can still use the functionality to operate the system.
      - Does the issue impact usability over time? For example, memory leaks.
      - Does the issue cause a measurable performance drop or an obvious regression compared to a prior version?
      - Do we have workarounds when the issue occurs? A workaround means some reasonable manual operations for users; it doesn't include hacks.
      - Is the issue 100% reproducible, or does it occur randomly?
      - Does the issue block the rest of testing once it happens?
      - Is the issue confirmed by an upstream issue? Is there a solution for that upstream issue?
  - ACTION: Ghada to add a sub-section about Workarounds to the bug template
  - ACTION: release team to make the recommendation re: Blueprints for Backlog in the next TSC meeting - pending
    - presented at the TSC meeting last week, not closed yet
  - ACTION: Frank to talk to CENGN about getting sufficient space (pending any other parameters from Scott)
    - CENGN to get back to Frank on this - re: feasibility & cost - they're setting up a PoC for us, no firm date
  - ACTION: Numan & Ada to sort out how aggregate regression reporting will be done (manual & automated) - they have booked a meeting to discuss - done
    - they're also working on harmonizing their Sanity reports - one common set of tests being run by the GDC/Ottawa test teams - it'll be sent to the Community for review
  - ACTION: Numan/Yang to arrange an automation framework info session for the Community (in a few weeks, after Yang's vacation)
    - Numan is setting this up with Yang - details TBD by next week's Community call

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, July 30, 2019 9:29 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Community Call (July 31, 2019)

Hi everyone, reminder of the Community Call tomorrow.

Topics on the agenda include...

- sanity - any red sanities since last Community meeting?
- reviews in need of attention
- defect trend / gating launchpads
- stx.2.0 RC1 is next week
- documentation update
- bitergia update: see https://etherpad.openstack.org/p/stx-bitergia
- open actions from previous meetings

Please feel free to add topics on the etherpad [0].

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190731T1400

From Robert.Church at windriver.com  Wed Jul 31 19:29:05 2019
From: Robert.Church at windriver.com (Church, Robert)
Date: Wed, 31 Jul 2019 19:29:05 +0000
Subject: [Starlingx-discuss] stx-openstack chart optionality
In-Reply-To: <3E093B33-2DBF-4378-B37C-D61EF58BDC77@windriver.com>
References: <3E093B33-2DBF-4378-B37C-D61EF58BDC77@windriver.com>
Message-ID: <62725E2D-E9C4-4C79-A197-AFACF4FF7F78@windriver.com>

This is a reminder: https://review.opendev.org/#/c/671950 has just merged.
Upon upload of stx-openstack you can see which charts are enabled with:

[sysadmin at controller-0 ~(keystone_admin)]$ system helm-override-list stx-openstack --long
+---------------------+--------------------------------+---------------+
| chart name          | overrides namespaces           | chart enabled |
+---------------------+--------------------------------+---------------+
| aodh                | [u'openstack']                 | [False]       |
| barbican            | [u'openstack']                 | [False]       |
| ceilometer          | [u'openstack']                 | [False]       |
| ceph-rgw            | [u'openstack']                 | [False]       |
| cinder              | [u'openstack']                 | [True]        |
| garbd               | [u'openstack']                 | [True]        |
| glance              | [u'openstack']                 | [True]        |
| gnocchi             | [u'openstack']                 | [False]       |
| heat                | [u'openstack']                 | [True]        |
| helm-toolkit        | []                             | []            |
| horizon             | [u'openstack']                 | [True]        |
| ingress             | [u'kube-system', u'openstack'] | [True, True]  |
| ironic              | [u'openstack']                 | [False]       |
| keystone            | [u'openstack']                 | [True]        |
| keystone-api-proxy  | [u'openstack']                 | [True]        |
| libvirt             | [u'openstack']                 | [True]        |
| mariadb             | [u'openstack']                 | [True]        |
| memcached           | [u'openstack']                 | [True]        |
| neutron             | [u'openstack']                 | [True]        |
| nginx-ports-control | []                             | []            |
| nova                | [u'openstack']                 | [True]        |
| nova-api-proxy      | [u'openstack']                 | [True]        |
| openvswitch         | [u'openstack']                 | [True]        |
| panko               | [u'openstack']                 | [False]       |
| placement           | [u'openstack']                 | [True]        |
| rabbitmq            | [u'openstack']                 | [True]        |
| version_check       | []                             | []            |
+---------------------+--------------------------------+---------------+

Then enable/disable a specific chart with:

[sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack barbican openstack --enabled=true
+------------+--------------------+
| Property   | Value              |
+------------+--------------------+
| attributes | {u'enabled': True} |
| name       | barbican           |
| namespace  | openstack          |
+------------+--------------------+

Regards,
Bob

On 7/22/19, 1:01 AM, "Church, Robert" wrote:

Here's an update with regard to behavioral changes for optional charts/services.

Current behavior:
-----------------
With commit https://opendev.org/starlingx/config/commit/e6b177eb93f85b5a4e53242214060c97728e2048, Barbican and the Telemetry services (aodh, gnocchi, ceilometer, panko) are disabled by default.

To enable these services with the current builds, you must introduce a label to a host as follows:
* system host-label-assign controller-0 openstack-barbican=enabled
* system host-label-assign controller-0 openstack-telemetry=enabled

This follows the existing pattern established for enabling ironic.

Future behavior:
----------------
It should be noted that this behavior is transitional, as I have https://review.opendev.org/#/c/671950/ up for review. With this update, each chart within an application can be enabled/disabled from the command line prior to application apply.
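To make that concrete, a minimal sketch of the resulting flow, using the commands shown at the top of this message (the choice of ceilometer and the exact sequencing are illustrative):

    # enable an optional chart, then re-apply the application so the change takes effect
    system helm-chart-attribute-modify stx-openstack ceilometer openstack --enabled=true
    system application-apply stx-openstack
    system application-list    # watch the apply progress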
Again, by default, the following stx-openstack charts will be disabled on application upload, per the metadata packaged with the application:

disabled_charts:
- aodh
- barbican
- ceilometer
- gnocchi
- ironic
- panko

The current enablement state of a chart can be seen with:

$ system helm-override-show stx-openstack aodh openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack barbican openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack ceilometer openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack gnocchi openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack ironic openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack panko openstack | grep enabled
| attributes | enabled: false |

and a chart can be enabled/disabled with:

$ system help helm-chart-modify
usage: system helm-chart-modify [--enabled <enabled>] <app name> <chart name> <namespace>

Modify helm chart attributes. This function is provided to modify system behavioral attributes related to a chart. Chart overrides are not managed through this command.

Positional arguments:
  <app name>    Name of the application
  <chart name>  Name of the chart
  <namespace>   Namespace of the chart

Optional arguments:
  --enabled <enabled>  Chart enabled.

$ system helm-chart-modify stx-openstack barbican openstack --enabled=true
+------------------+--------------------+
| Property         | Value              |
+------------------+--------------------+
| name             | barbican           |
| namespace        | openstack          |
| system_overrides | {u'enabled': True} |
+------------------+--------------------+

$ system helm-override-show stx-openstack aodh openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack barbican openstack | grep enabled
| attributes | enabled: true |
$ system helm-override-show stx-openstack ceilometer openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack gnocchi openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack ironic openstack | grep enabled
| attributes | enabled: false |
$ system helm-override-show stx-openstack panko openstack | grep enabled
| attributes | enabled: false |

When a chart is disabled, it is dynamically removed from its chart group via the application's Armada manifest operator during override generation. When a chart is enabled, additional system criteria may be applied by a chart plugin to disable the chart if a specific system configuration is not met.

Thanks,
Bob

From chenjie.xu at intel.com  Wed Jul 31 05:34:38 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 31 Jul 2019 05:34:38 +0000
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To: 
References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net>
Message-ID: 

Hi Kunpeng,

I can't reproduce this bug on stx 2.0. I have tried passing through a physical NIC and a VF to the VM. Rebooting the VM doesn't cause ovs-vswitchd to restart, and other VMs aren't affected. Maybe you can use stx 2.0 instead of stx 1.0, given that stx 2.0 will be released in the near future.

Best Regards,
Xu, Chenjie
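For anyone trying to confirm the same behavior on their own setup, one low-tech check is to watch the ovs-vswitchd process across the VM reboot. A sketch (run the openstack command from a shell set up for the containerized keystone as discussed earlier in this digest; the VM name is a placeholder):

    pidof ovs-vswitchd                 # note the PID(s) before the reboot
    openstack server reboot <vm-name>
    pidof ovs-vswitchd                 # a changed PID afterwards means ovs-vswitchd restarted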
From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Monday, July 22, 2019 2:29 PM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,

Actually, syslog logged nothing when I restarted the VM, and ovs-vswitchd did not log the reason it restarted, so it is difficult to debug. I don't know whether stx 2.0 will reproduce this bug, but on stx 1.0 it can be reproduced reliably.

Thanks
Kunpeng

On Jul 22, 2019, at 11:31, Xu, Chenjie wrote:

Hi Kunpeng,
Sorry for missing logs.rar. The following logs in openvswitch/ovs-vswitchd.log show that ovs-vswitchd was restarted, but they don't show why it was restarted:

2019-07-18T12:29:59.948Z|00286|connmgr|INFO|br-phy0<->unix#9: 1 flow_mods in the last 0 s (1 adds)
2019-07-19T02:04:11.973Z|00151|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
2019-07-19T02:04:21.273Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2019-07-19T02:04:21.277Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0
2019-07-19T02:04:21.277Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1
2019-07-19T02:04:21.277Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores
2019-07-19T02:04:21.277Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2019-07-19T02:04:21.277Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2019-07-19T02:04:21.279Z|00007|dpdk|INFO|Using DPDK 17.11.0
2019-07-19T02:04:21.279Z|00008|dpdk|INFO|DPDK Enabled - initializing...

The syslog doesn't contain the logs for 2019-07-19. Could you please collect that part of the log?

I will try to reproduce this bug on StarlingX 2.0 and will let you know the result.

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Monday, July 22, 2019 10:27 AM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,
I attached those logs to the last email; I don't know why you didn't get them. I will attach logs.tar again. If you cannot find it, tell me in time.

Thanks
Kunpeng

On Jul 19, 2019, at 16:17, Xu, Chenjie wrote:

Hi Kunpeng,
From the below logs, we can find that:
1. The ovs agent detects that OVS is dead.
2. After OVS has been restarted, the ovs agent tries to reset bridges and recover ports.

2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically.
2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389
2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports.

Could you please attach the below logs?
/var/log/openvswitch/ovs-vswitchd.log /var/log/openvswitch/ovsdb-server.log /var/log/syslog neutron log (the log file is specified in /etc/neutron/neutron.conf) Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Friday, July 19, 2019 10:21 AM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart Hi Chenjie, This is the logs. At UTC 2019:02:04 I restarted the VM. In openstack.log I found some error messages, I don’t know if it’s relevant. 2019-07-19 02:04:16.141 186477 INFO eventlet.wsgi.server [req-6600ca74-1f93-4e54-88c1-35f964f1e055 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0178909 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json]: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file) 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json] dies due to the error: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file) 2019-07-19 02:04:22.077 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused 2019-07-19 02:04:22.079 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused) 2019-07-19 02:04:22.089 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused 2019-07-19 02:04:22.090 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out: Timeout: 10 seconds 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Failed to communicate with the switch: RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int Traceback (most recent call last): 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int flows = self.dump_flows(constants.CANARY_TABLE) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 147, in dump_flows 2019-07-19 
02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int reply_multi=True) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 95, in _send_msg 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int raise RuntimeError(m) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int 2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically. 2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389 2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports. 2019-07-19 02:04:24.242 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Mapping physical network providernet-a to bridge br-phy0 2019-07-19 02:04:24.295 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Bridge br-phy0 has datapath-ID 0000f8f21e640120 Kunpeng On Jul 19, 2019, at 09:29, Xu, Chenjie > wrote: Hi Kunpeng, You can check the bridge and openflows by the following commands: ovs-vsctl show ovs-ofctl dump-flows br-int ovs-ofctl dump-flows br-phy0 The virtual network used by VMs is based on those openflows. And restarting ovs-vswitchd can’t reinstall the openflows. That’s why when the ovs-vswitchd restart, you will lose the connections to VMs. I think we need to figure out why ovs-vswitchd is restarted when you restart the VM. Could you please check the below logs to see why ovs-vswitchd is restarted? /var/log/openvswitch/ovs-vswitchd.log /var/log/syslog neutron log (the log file is specified in /etc/neutron/neutron.conf) Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Thursday, July 18, 2019 7:09 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart I also find that when the ovs-vswitchd is restarted, I will lose the connections to VMs. 
Before restart ovs: controller-0:/home/wrsroot# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.152/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 valid_lft forever preferred_lft forever inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link valid_lft forever preferred_lft forever 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8060/64 scope link valid_lft forever preferred_lft forever 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8061/64 scope link valid_lft forever preferred_lft forever 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff 16: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:120/64 scope link valid_lft forever preferred_lft forever 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::7c3a:87ff:fea3:9803/64 scope link valid_lft forever preferred_lft forever After: controller-0:/home/wrsroot# systemctl restart ovs-vswitchd controller-0:/home/wrsroot# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.152/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 
3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 valid_lft forever preferred_lft forever inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link valid_lft forever preferred_lft forever 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8060/64 scope link valid_lft forever preferred_lft forever 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8061/64 scope link valid_lft forever preferred_lft forever 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff 16: br-phy0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:120/64 scope link valid_lft forever preferred_lft forever 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::7c3a:87ff:fea3:9803/64 scope link valid_lft forever preferred_lft forever 20: tapfb74713e-cc: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff 21: tap1a965902-0b: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff One of the VMs: On Jul 18, 2019, at 18:09, 张鲲鹏 > wrote: Hi Xu,Chenjie I have tried to create VM with 2 pci_passthrough network ports without DPDK, there was the same problem when I rebooted it. Also, it was same when I reboot the VM with 2 SR-IOV VFs. Do you have any ideas to debug this problem? Thanks Kunpeng On Jul 17, 2019, at 14:59, Xu, Chenjie > wrote: Hi Kunpeng, Maybe you can use SR-IOV and passthrough the VF which has similar performance to physical NIC to the VM. And then you can use DPDK inside the VM with the VF. Sorry, I don’t have easy way to disable DPDK in stx1.0. The following command is used for stx2.0 which is still in progress: system modify --vswitch_type none Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Tuesday, July 16, 2019 5:40 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart Hi Chenjie, Well, I will try the network topology as you said. But passthrough NIC with DPDK is our customer’s requirement. And do you have some easy ways to disable dpdk of openvswitch in stx1.0? I had tried to execute “system modify --vswitch_type none” before “system host-unlock controller-0", but it doesn’t work well. 
Thanks Kunpeng On Jul 16, 2019, at 16:49, Xu, Chenjie > wrote: Hi Kunpeng, When you reboot the VM with two physical pci-passthrough NICs, ovs-vswtichd is restarted and the interfaces and bridges are down. The virtual networks used by the VMs are based on these interfaces and bridges. So other VMs will lost connections. Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which is used to passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology: 2 network port without DPDK > VM 2 network port with DPDK > Data Network 1 network port without DPDK > OAM Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Tuesday, July 16, 2019 3:54 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart Hi guys, Recently I got a strange problem in StarlingX. When I reboot one VM with two physical pci-passthrough NICs, then all of VMs cannot be connected. I lost connections with all VMs and also the VMs lost each others. Below is the StarlingX environment. 1. stx1.0 version, bootimage[1] 2. Simplex deployment 3. 5 Network ports. Only one don’t support DPDK,and it is used to OAM Network. In the rest, two are used to data network, and another two are used to passthrough to a VM. 4. The VM was attached two more virtual networks. I have tested the case of attaching one virtual net, it was no problem. When I reboot the VM, something were happened. The interfaces and bridges were down, all the virtual dhcp services were down and ovs-vswitchd was restarted. But when I up the interfaces and dhcp services and reboot the other VMs, I have got the connections with them again. It’s ok when to reboot the VM without physical NIC. We think it may be caused by ovs-dpdk, so we stop to use ovs-dpdk and start the ovs manually, the problem was gone. I cannot understand the problem, anybody could give me some comments for it? Thanks a lot. [1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso Kunpeng _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.kunpeng at 99cloud.net Wed Jul 31 06:16:48 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Wed, 31 Jul 2019 14:16:48 +0800 Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart In-Reply-To: References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net> <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net> <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net> Message-ID: <984CF134-4824-4963-8652-C3CF5AE19D34@99cloud.net> Hi Chenjie, Thanks for your attention. I will try it in stx2.0. Kunpeng > On Jul 31, 2019, at 13:34, Xu, Chenjie wrote: > > Hi Kunpeng, > I can’t reproduce this bug on stx 2.0. 
> I have tried passing through both a physical NIC and a VF to the VM. Rebooting the VM won't cause ovs-vswitchd to restart, and other VMs won't be affected. Maybe you can use stx 2.0 instead of stx 1.0, given that stx 2.0 will be released in the near future.
>
> Best Regards,
> Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Akshay.346 at hsc.com  Wed Jul 31 07:42:48 2019
From: Akshay.346 at hsc.com (Akshay 346)
Date: Wed, 31 Jul 2019 07:42:48 +0000
Subject: [Starlingx-discuss] Query about adding a new OpenStack service to StarlingX
Message-ID: 

Hello Team,

I hope you are all doing well. I would like to ask whether I can install any other OpenStack service (like Zun, or any other service that is not installed) on StarlingX 18.10. Please guide me if there is a way to add any additional OpenStack service to the StarlingX 18.10 release.

Best Regards,

DISCLAIMER: This electronic message and all of its contents contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 3428 bytes
Desc: image001.jpg
URL: 

From chenjie.xu at intel.com  Wed Jul 31 08:37:39 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 31 Jul 2019 08:37:39 +0000
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart
In-Reply-To: <984CF134-4824-4963-8652-C3CF5AE19D34@99cloud.net>
References: <9A27E2D2-A20A-42D9-8A57-AB9035ED84AE@99cloud.net>
 <39E54470-30B3-409B-940D-C1156D48E354@99cloud.net>
 <8FF52DF2-29BF-4CF5-91C6-8DF60FE4A51B@99cloud.net>
 <984CF134-4824-4963-8652-C3CF5AE19D34@99cloud.net>
Message-ID: 

No problem!

Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Wednesday, July 31, 2019 2:17 PM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,
Thanks for your attention. I will try it in stx2.0.
Kunpeng
Maybe you can use stx 2.0 instead of stx 1.0 based on that stx 2.0 will be released in the near future. Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Monday, July 22, 2019 2:29 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart Hi Chenjie, Actually the syslog logged nothing when I restart the VM. And also the ovs-vswitchd didn’t log the reason why the ovs restarted, so it is difficult to debug. I don’t know if the stx 2.0 will reproduce this bug, but stx 1.0 can be reproduced stably. Thanks Kunpeng On Jul 22, 2019, at 11:31, Xu, Chenjie > wrote: Hi Kunpeng, Sorry for not seeing logs.rar. The following logs in openvswitch/ovs-vswitchd.log show that ovs-vswitchd is restarted but doesn’t show why ovs-vswitchd is restarted: 2019-07-18T12:29:59.948Z|00286|connmgr|INFO|br-phy0<->unix#9: 1 flow_mods in the last 0 s (1 adds) 2019-07-19T02:04:11.973Z|00151|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE 2019-07-19T02:04:21.273Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log 2019-07-19T02:04:21.277Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0 2019-07-19T02:04:21.277Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1 2019-07-19T02:04:21.277Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores 2019-07-19T02:04:21.277Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting... 2019-07-19T02:04:21.277Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected 2019-07-19T02:04:21.279Z|00007|dpdk|INFO|Using DPDK 17.11.0 2019-07-19T02:04:21.279Z|00008|dpdk|INFO|DPDK Enabled - initializing... The syslog doesn’t contain the logs for 2019-07-19. Could you please collect those part log? I will try to reproduce this bug on StarlingX 2.0 and will let you know the result. Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Monday, July 22, 2019 10:27 AM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart Hi Chenjie, I have attached those logs last email, I don’t know why you didn’t get them. I will attach it(logs.tar) again, if you cannot find it, tell me in time. Thanks Kunpeng On Jul 19, 2019, at 16:17, Xu, Chenjie > wrote: Hi Kunpeng, From the below logs, we can find that 1. ovs agent detects that the OVS is dead. 2. After OVS has been restarted, ovs agent tries to reset bridges and recover ports. 2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically. 2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389 2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports. Could you please attach the below logs? 
/var/log/openvswitch/ovs-vswitchd.log /var/log/openvswitch/ovsdb-server.log /var/log/syslog neutron log (the log file is specified in /etc/neutron/neutron.conf) Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Friday, July 19, 2019 10:21 AM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart Hi Chenjie, This is the logs. At UTC 2019:02:04 I restarted the VM. In openstack.log I found some error messages, I don’t know if it’s relevant. 2019-07-19 02:04:16.141 186477 INFO eventlet.wsgi.server [req-6600ca74-1f93-4e54-88c1-35f964f1e055 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0178909 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Error received from [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json]: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file) 2019-07-19 02:04:21.077 243655 ERROR neutron.agent.linux.async_process [-] Process [ovsdb-client monitor tcp:127.0.0.1:6639 Interface name,ofport,external_ids --format=json] dies due to the error: ovsdb-client: tcp:127.0.0.1:6639: receive failed (End of file) 2019-07-19 02:04:22.077 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused 2019-07-19 02:04:22.079 243655 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused) 2019-07-19 02:04:22.089 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: send error: Connection refused 2019-07-19 02:04:22.090 243737 WARNING ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6639: connection dropped (Connection refused) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out: Timeout: 10 seconds 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Failed to communicate with the switch: RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int Traceback (most recent call last): 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py", line 52, in check_canary_table 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int flows = self.dump_flows(constants.CANARY_TABLE) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 147, in dump_flows 2019-07-19 
02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int reply_multi=True) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int File "/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py", line 95, in _send_msg 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int raise RuntimeError(m) 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int RuntimeError: ofctl request version=0x4,msg_type=0x12,msg_len=0x38,xid=0xb04757ee,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1) timed out 2019-07-19 02:04:23.171 243655 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.br_int 2019-07-19 02:04:23.188 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is dead. OVSNeutronAgent will keep running and checking OVS status periodically. 2019-07-19 02:04:23.401 186476 INFO eventlet.wsgi.server [req-fb780aa9-92a7-4cee-8195-fa34b5d7b0e0 2798eb7d8ca94c3eb4c134eb47bca7ea cea798d27ac44ca8b871877fd2adfeea default - -] 127.168.204.3,127.168.204.3 "GET /v1/alarms/summary HTTP/1.1" status: 200 len: 306 time: 0.0467389 2019-07-19 02:04:24.190 243655 WARNING neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] OVS is restarted. OVSNeutronAgent will reset bridges and recover ports. 2019-07-19 02:04:24.242 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Mapping physical network providernet-a to bridge br-phy0 2019-07-19 02:04:24.295 243655 INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [req-8c7be001-d68e-4d45-929e-50017d044bb0 - - - - -] Bridge br-phy0 has datapath-ID 0000f8f21e640120 Kunpeng On Jul 19, 2019, at 09:29, Xu, Chenjie > wrote: Hi Kunpeng, You can check the bridge and openflows by the following commands: ovs-vsctl show ovs-ofctl dump-flows br-int ovs-ofctl dump-flows br-phy0 The virtual network used by VMs is based on those openflows. And restarting ovs-vswitchd can’t reinstall the openflows. That’s why when the ovs-vswitchd restart, you will lose the connections to VMs. I think we need to figure out why ovs-vswitchd is restarted when you restart the VM. Could you please check the below logs to see why ovs-vswitchd is restarted? /var/log/openvswitch/ovs-vswitchd.log /var/log/syslog neutron log (the log file is specified in /etc/neutron/neutron.conf) Best Regards, Xu, Chenjie From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Thursday, July 18, 2019 7:09 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart I also find that when the ovs-vswitchd is restarted, I will lose the connections to VMs. 
Before restart ovs: controller-0:/home/wrsroot# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.152/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff 4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1 valid_lft forever preferred_lft forever inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link valid_lft forever preferred_lft forever 5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff 6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff 7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff 8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8060/64 scope link valid_lft forever preferred_lft forever 9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:8061/64 scope link valid_lft forever preferred_lft forever 10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff 13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff 16: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff inet6 fe80::faf2:1eff:fe64:120/64 scope link valid_lft forever preferred_lft forever 17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff inet6 fe80::7c3a:87ff:fea3:9803/64 scope link valid_lft forever preferred_lft forever After: controller-0:/home/wrsroot# systemctl restart ovs-vswitchd controller-0:/home/wrsroot# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.152/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 
3: enp59s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether f8:f2:1e:64:01:21 brd ff:ff:ff:ff:ff:ff
4: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 4c:d9:8f:72:fb:63 brd ff:ff:ff:ff:ff:ff
    inet 172.16.180.154/24 brd 172.16.180.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::4ed9:8fff:fe72:fb63/64 scope link
       valid_lft forever preferred_lft forever
5: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:72:fb:64 brd ff:ff:ff:ff:ff:ff
6: enp94s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:88 brd ff:ff:ff:ff:ff:ff
7: enp94s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 4c:d9:8f:5e:ab:89 brd ff:ff:ff:ff:ff:ff
8: enp175s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:60 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8060/64 scope link
       valid_lft forever preferred_lft forever
9: enp175s0f1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether f8:f2:1e:64:80:61 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:8061/64 scope link
       valid_lft forever preferred_lft forever
10: ovs-netdev: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 62:34:73:47:50:be brd ff:ff:ff:ff:ff:ff
13: br-int: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c6:91:5d:27:b4:43 brd ff:ff:ff:ff:ff:ff
16: br-phy0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether f8:f2:1e:64:01:20 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::faf2:1eff:fe64:120/64 scope link
       valid_lft forever preferred_lft forever
17: lldp16ba3755-27: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 7e:3a:87:a3:98:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7c3a:87ff:fea3:9803/64 scope link
       valid_lft forever preferred_lft forever
20: tapfb74713e-cc: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:2d:36:ee:2d:85 brd ff:ff:ff:ff:ff:ff
21: tap1a965902-0b: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5e:f6:81:31:54:21 brd ff:ff:ff:ff:ff:ff

One of the VMs:

On Jul 18, 2019, at 18:09, 张鲲鹏 wrote:

Hi Xu, Chenjie,
I have tried to create a VM with 2 pci-passthrough network ports without DPDK, and the same problem occurred when I rebooted it. It was also the same when I rebooted a VM with 2 SR-IOV VFs. Do you have any ideas for debugging this problem?
Thanks
Kunpeng

On Jul 17, 2019, at 14:59, Xu, Chenjie wrote:

Hi Kunpeng,
Maybe you can use SR-IOV and pass through a VF, which has performance similar to the physical NIC, to the VM. Then you can use DPDK inside the VM with the VF. Sorry, I don't have an easy way to disable DPDK in stx 1.0. The following command is used for stx 2.0, which is still in progress:
system modify --vswitch_type none
Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 5:40 PM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi Chenjie,
Well, I will try the network topology as you said. But a passthrough NIC with DPDK is our customer's requirement. Do you have an easy way to disable the DPDK side of Open vSwitch in stx 1.0? I had tried to execute "system modify --vswitch_type none" before "system host-unlock controller-0", but it doesn't work well.

Thanks
Kunpeng
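[Editor's note: a hedged sketch of the stx 2.0 sequence Chenjie refers to, not a verified procedure for 1.0. It assumes the vswitch type must be changed while controller-0 is locked, and the exact flags may differ by build:

# Lock the host, switch the vswitch type away from OVS-DPDK, then unlock.
# The lock/unlock cycle (a reconfiguration reboot) is normally what makes
# the new vswitch type take effect.
system host-lock controller-0
system modify --vswitch_type none
system host-unlock controller-0]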
On Jul 16, 2019, at 16:49, Xu, Chenjie wrote:

Hi Kunpeng,
When you reboot the VM with two physical pci-passthrough NICs, ovs-vswitchd is restarted and the interfaces and bridges go down. The virtual networks used by the VMs are based on these interfaces and bridges, so the other VMs lose their connections. Typically you should not pass through a PCI device which is bound to DPDK to the VM. Are the 2 network ports which are used for passthrough to the VM bound to DPDK? If so, could you please try OVS-DPDK with the following network topology:
2 network ports without DPDK > VM
2 network ports with DPDK > Data Network
1 network port without DPDK > OAM
Best Regards,
Xu, Chenjie

From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net]
Sent: Tuesday, July 16, 2019 3:54 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Starlingx-disscuss][ovs-dpdk] Reboot instance with physical pci-passthrough NIC causes OVS restart

Hi guys,
Recently I hit a strange problem in StarlingX. When I reboot one VM that has two physical pci-passthrough NICs, none of the VMs can be reached any more: I lose my connections to all the VMs, and the VMs also lose connectivity to each other. Below is the StarlingX environment.
1. stx1.0 version, bootimage[1]
2. Simplex deployment
3. 5 network ports. Only one doesn't support DPDK, and it is used for the OAM network. Of the rest, two are used for the data network, and the other two are passed through to a VM.
4. The VM was attached to two more virtual networks. I have tested the case of attaching only one virtual network; there was no problem.
When I rebooted the VM, several things happened: the interfaces and bridges went down, all the virtual DHCP services went down, and ovs-vswitchd was restarted. But after I brought the interfaces and DHCP services back up and rebooted the other VMs, I got my connections to them again. It is fine to reboot a VM without a physical NIC.
We think it may be caused by ovs-dpdk, so we stopped using ovs-dpdk and started OVS manually, and the problem was gone.
I cannot understand the problem. Could anybody give me some comments on it? Thanks a lot.
[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/r2018.10/centos/2018.10.0/outputs/iso/bootimage.iso
Kunpeng

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ramos.escobarx.hector.ivan at intel.com Wed Jul 31 18:32:33 2019
From: ramos.escobarx.hector.ivan at intel.com (Hector Ivan, Ramos EscobarX)
Date: Wed, 31 Jul 2019 18:32:33 +0000
Subject: [Starlingx-discuss] Commands for dns resolution
Message-ID:

Hi, currently I'm working on a TC that uses the following commands:
>> system service-parameter-add network ml2 extension_drivers=dns
>> system service-parameter-add network default dns_domain=wrs_dns.com
Both show the following error:
>> Invalid service name network.
Can someone provide the current command used to add these parameters?
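[Editor's note: a hedged pointer rather than a verified answer. The set of service names a given load accepts can be checked before adding parameters; the command below exists in the sysinv CLI, though its output format varies by release:

# List existing service parameters; the service column shows the names
# this release actually accepts. On loads where neutron runs containerized,
# "network" is no longer among them, which would explain the error above.
system service-parameter-list

On container-based loads, neutron options such as dns_domain are generally set through stx-openstack helm overrides rather than service parameters; the exact chart names and override commands vary by release, so check the release documentation.]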
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Bill.Zvonar at windriver.com Wed Jul 31 19:58:02 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 31 Jul 2019 19:58:02 +0000
Subject: [Starlingx-discuss] First Contact SIG (Aug 1, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ABEA4F@ALA-MBD.corp.ad.wrs.com>

Hi all - for those that would like to join, there will be a First Contact SIG call tomorrow at 1330 UTC (see [1] for start time in your timezone).
Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-first-contact
[1] meeting start time in various timezones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190801T1330

-----Original Message-----
From: Zvonar, Bill
Sent: Thursday, July 4, 2019 10:32 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] First Contact SIG (July 4, 2019)

Apologies for forgetting to send a reminder out before today's call. Today we discussed the historical responsiveness to questions on the mailing list, focusing on those who are looking for help & are more 'new' than others. The results for May are captured on the etherpad [0] under the heading "Mailing List Responsiveness". We'll discuss in next week's community call, comments are welcome here or on the etherpad.
Bill...
[0] https://etherpad.openstack.org/p/stx-first-contact

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Anirudh.Gupta at hsc.com Wed Jul 31 05:08:17 2019
From: Anirudh.Gupta at hsc.com (Anirudh Gupta)
Date: Wed, 31 Jul 2019 05:08:17 +0000
Subject: [Starlingx-discuss] Config-file approach to run Config-controller
In-Reply-To: <72AD03D27224C74982BE13246D75B39739A3F659@SHSMSX103.ccr.corp.intel.com>
References: <72AD03D27224C74982BE13246D75B39739A3F659@SHSMSX103.ccr.corp.intel.com>
Message-ID:

Hi,
Thanks Yan.
Regards
Anirudh Gupta

From: Chen, Yan
Sent: 31 July 2019 06:55
To: Anirudh Gupta ; starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: RE: [Starlingx-discuss] Config-file approach to run Config-controller

Hi,
You can try the attached sample files.
Yan

From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com]
Sent: Tuesday, July 30, 2019 19:03
To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io
Subject: [Starlingx-discuss] Config-file approach to run Config-controller

Hi Team,
I am using StarlingX 2018.10 Release and want to reduce the manual effort in running "config-controller". For this I found a way in which we can pass a Config-File as a parameter while running the "config-controller".

localhost:~$ config_controller --help
Usage: /usr/bin/config_controller
Perform system configuration
The default action is to perform the initial configuration for the system. The following options are also available:
  --config-file       Perform configuration using INI file
  --backup            Backup configuration using the given name
  --clone-iso         Clone and create an image with the given file name
  --clone-status      Status of the last installation of cloned image
  --restore-system    Restore system configuration from backup file with the given name, full path required
  --restore-images    Restore images from backup file with the given name, full path required
  --restore-complete  Complete restore of controller-0
  --allow-ssh         Allow configuration to be executed in ssh

But I can't find any Sample config-file available in the documents.
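[Editor's note: a hedged sketch of the INI shape that config_controller --config-file consumed around the 2018.10 release. The section and key names below are illustrative only and the values are placeholders; the authoritative reference is the sample file Yan attached earlier in this thread.

; illustrative config_controller INI skeleton (placeholders, not a tested file)
[SYSTEM]
SYSTEM_MODE = duplex

[LOGICAL_INTERFACE_1]
LAG_INTERFACE = N
INTERFACE_MTU = 1500
INTERFACE_PORTS = enp0s3

[OAM_NETWORK]
CIDR = 10.10.10.0/24
GATEWAY = 10.10.10.1
IP_START_ADDRESS = 10.10.10.2
IP_END_ADDRESS = 10.10.10.4
LOGICAL_INTERFACE = LOGICAL_INTERFACE_1

[AUTHENTICATION]
ADMIN_PASSWORD = St4rlingX*

With a file like this in place, the tool would then be invoked non-interactively, e.g.: sudo config_controller --config-file ./stx_config.ini]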
Also, I found a bug https://bugs.launchpad.net/starlingx/+bug/1814833 which reports an issue with running config_controller from a config file.
Can someone please tell me if the method of passing a config-file is supported in StarlingX 2018.10? If yes, can you please share a sample config-file?
If the method is not supported, can someone please share a way to reduce the manual effort and automate the inputs required by config_controller?
Regards
Anirudh Gupta

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shaxiaoz_7443 at qq.com Wed Jul 31 09:49:49 2019
From: shaxiaoz_7443 at qq.com (504626684)
Date: Wed, 31 Jul 2019 17:49:49 +0800
Subject: [Starlingx-discuss] Reply: RE: Reply: Questions about using StarlingX (回复:RE: 回复: 请教StarlingX的使用问题)
In-Reply-To:
References: <379baff5-a723-46c4-9932-9037b49f5390.xiongzhiwei@baicells.com> <2FD5DDB5A04D264C80D42CA35194914F35FB24BD@SHSMSX104.ccr.corp.intel.com>
Message-ID:

Hi,
Recently I have been reading the StarlingX project code, and now I have a question about the update project. I know the project uses Pecan for the HTTP API. In the entry file "update\cgcs-patch\cgcs-patch\cgcs_patch\api\controllers\root.py", the variable pc (from cgcs_patch.patch_controller import pc) is used everywhere, but I can't find where it is initialized. So I think that when the project starts, the variable pc is None.
Waiting for your reply. Thank you very much. Hope you are all happy.
--
Liu Zheng

------------------ Original Message ------------------
From: "504626684";
Date: Wednesday, June 26, 2019, 9:17 AM
To: "Xie, Cindy"; "xiongzhiwei"; "starlingx-discuss";
Subject: Reply: RE: [Starlingx-discuss] Reply: Questions about using StarlingX

Hi, Cindy,
Hi, xiongzhiwei,
I am very excited about your reply, and very thankful. I have now downloaded the ISO from the CENGN server, but because I don't have an E5 machine, I have not run it yet. Soon I will have one, and then I will try again.
I have read the wiki and the starlingx.io documents, but without a demo to run I can't practice more. Then I want to learn the GitHub project code.
Yesterday I opened the projects in PyCharm and read a little, but I can't run them yet. If someone has a guide for reading the project, or for the project's software environment setup, please tell me; it would help me a lot.
Thank you all again. I am very happy to learn and discuss with all of you.
Hope you are all happy.
Liu Zheng.

------------------ Original Message ------------------
From: "Xie, Cindy";
Date: Wednesday, June 26, 2019, 9:03 AM
To: "xiongzhiwei"; "504626684"; "starlingx-discuss";
Subject: RE: [Starlingx-discuss] Reply: Questions about using StarlingX

Hi, Liu Zheng,
Not sure if you've downloaded an ISO from the CENGN server; there is a daily build we've published - select one with "green" sanity results and follow the wiki page for deployment. You may have to set up a proxy or a local registry if you're in the PRC, due to firewall issues. Please use this mailing list for any issues you encounter.
Thx. - cindy

From: xiongzhiwei [mailto:xiongzhiwei at baicells.com]
Sent: Tuesday, June 25, 2019 11:59 PM
To: 504626684 ; starlingx-discuss
Subject: [Starlingx-discuss] Reply: Questions about using StarlingX

[Translated from Chinese:] I studied it for a while around February and March, but I haven't followed it for the last three months. You can ask the Intel people - they are the main force, and the main team is in Shanghai. You can also report any problems on the mailing list at any time.
Sent from a DingTalk dedicated business mailbox
------------------------------------------------------------------
From: 504626684
Date: June 25, 2019, 11:13:51
To: starlingx-discuss
Subject: [Starlingx-discuss] Questions about using StarlingX

[Translated from Chinese:] Hello,
My name is Liu Zheng, and I am a programmer. Recently I have been learning about edge computing and StarlingX. Through https://www.starlingx.io/ I studied the StarlingX documentation and got a rough understanding of StarlingX's structure and deployment, but I still haven't figured out how exactly StarlingX should be used, or which materials I should read next. Could you share some other materials or a demo for reference?
Thank you very much, and best wishes.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jose.perez.carranza at intel.com Wed Jul 31 20:07:14 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Wed, 31 Jul 2019 20:07:14 +0000
Subject: [Starlingx-discuss] [Containers] How to retrieve data from a helm chart already applied
Message-ID: <07CBA65D-3C8C-404A-80B9-E303197D429E@intel.com>

Hi,
I'm trying to inspect a chart already applied on the StarlingX deployment. Does anyone know if I'm missing something in the steps mentioned below? Currently I'm not able to retrieve the data.

controller-0:~$ helm list | grep nova
osh-openstack-nova              1  Thu Jul 18 19:32:41 2019  DEPLOYED  nova-0.1.0            openstack
osh-openstack-nova-api-proxy    1  Thu Jul 18 19:32:41 2019  DEPLOYED  nova-api-proxy-0.1.0  openstack
-------
controller-0:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Unable to get an update from the "starlingx" chart repository (http://127.0.0.1:8080/helm_charts/starlingx):
    Get http://127.0.0.1:8080/helm_charts/starlingx/index.yaml: dial tcp 127.0.0.1:8080: connect: connection refused
...Unable to get an update from the "stx-platform" chart repository (http://127.0.0.1:8080/helm_charts/stx-platform):
    Get http://127.0.0.1:8080/helm_charts/stx-platform/index.yaml: dial tcp 127.0.0.1:8080: connect: connection refused
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
    Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp 172.217.164.112:443: connect: connection timed out
Update Complete. ⎈ Happy Helming!⎈

controller-0:~$ helm inspect osh-openstack-nova
Error: failed to download "osh-openstack-nova" (hint: running `helm repo update` may help)

Regards
Jose
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Don.Penney at windriver.com Wed Jul 31 20:17:36 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Wed, 31 Jul 2019 20:17:36 +0000
Subject: [Starlingx-discuss] Reply: RE: Reply: Questions about using StarlingX (回复:RE: 回复: 请教StarlingX的使用问题)
In-Reply-To:
References:
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC153F20D@ALA-MBD.corp.ad.wrs.com>

This is instantiated in the main() of patch_controller.py:
https://opendev.org/starlingx/update/src/branch/master/cgcs-patch/cgcs-patch/cgcs_patch/patch_controller.py#L2596
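[Editor's note: a simplified sketch of the pattern Don points at, not the actual cgcs_patch source. The module-level name is None at import time and is populated by main() before the Pecan application - and thus root.py with its "from cgcs_patch.patch_controller import pc" - is loaded, so the handlers import an already-initialized object rather than None:

# patch_controller_sketch.py -- illustrative only
pc = None  # module-level global; stays None until main() runs


class PatchController(object):
    """Stand-in for the controller object the REST handlers consult."""
    def __init__(self):
        self.patch_data = {}


def initialize_pc():
    # Rebind the module-level global so later importers of this module
    # see the constructed object.
    global pc
    pc = PatchController()


def main():
    initialize_pc()
    # ... the API server is started only after this point, so no request
    # handler can observe pc as None.


if __name__ == "__main__":
    main()]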
From: 504626684 [mailto:shaxiaoz_7443 at qq.com]
Sent: Wednesday, July 31, 2019 5:50 AM
To: Xie, Cindy; xiongzhiwei; starlingx-discuss
Subject: [Starlingx-discuss] Reply: RE: Reply: Questions about using StarlingX

Hi,
Recently I have been reading the StarlingX project code, and now I have a question about the update project. I know the project uses Pecan for the HTTP API. In the entry file "update\cgcs-patch\cgcs-patch\cgcs_patch\api\controllers\root.py", the variable pc (from cgcs_patch.patch_controller import pc) is used everywhere, but I can't find where it is initialized. So I think that when the project starts, the variable pc is None.
Waiting for your reply. Thank you very much. Hope you are all happy.
--
Liu Zheng

------------------ Original Message ------------------
From: "504626684";
Date: Wednesday, June 26, 2019, 9:17 AM
To: "Xie, Cindy"; "xiongzhiwei"; "starlingx-discuss";
Subject: Reply: RE: [Starlingx-discuss] Reply: Questions about using StarlingX

Hi, Cindy,
Hi, xiongzhiwei,
I am very excited about your reply, and very thankful. I have now downloaded the ISO from the CENGN server, but because I don't have an E5 machine, I have not run it yet. Soon I will have one, and then I will try again.
I have read the wiki and the starlingx.io documents, but without a demo to run I can't practice more. Then I want to learn the GitHub project code. Yesterday I opened the projects in PyCharm and read a little, but I can't run them yet. If someone has a guide for reading the project, or for the project's software environment setup, please tell me; it would help me a lot.
Thank you all again. I am very happy to learn and discuss with all of you.
Hope you are all happy.
Liu Zheng.

------------------ Original Message ------------------
From: "Xie, Cindy";
Date: Wednesday, June 26, 2019, 9:03 AM
To: "xiongzhiwei"; "504626684"; "starlingx-discuss";
Subject: RE: [Starlingx-discuss] Reply: Questions about using StarlingX

Hi, Liu Zheng,
Not sure if you've downloaded an ISO from the CENGN server; there is a daily build we've published - select one with "green" sanity results and follow the wiki page for deployment. You may have to set up a proxy or a local registry if you're in the PRC, due to firewall issues. Please use this mailing list for any issues you encounter.
Thx. - cindy

From: xiongzhiwei [mailto:xiongzhiwei at baicells.com]
Sent: Tuesday, June 25, 2019 11:59 PM
To: 504626684 ; starlingx-discuss
Subject: [Starlingx-discuss] Reply: Questions about using StarlingX

[Translated from Chinese:] I studied it for a while around February and March, but I haven't followed it for the last three months. You can ask the Intel people - they are the main force, and the main team is in Shanghai. You can also report any problems on the mailing list at any time.
Sent from a DingTalk dedicated business mailbox
------------------------------------------------------------------
From: 504626684
Date: June 25, 2019, 11:13:51
To: starlingx-discuss
Subject: [Starlingx-discuss] Questions about using StarlingX

[Translated from Chinese:] Hello,
My name is Liu Zheng, and I am a programmer. Recently I have been learning about edge computing and StarlingX. Through https://www.starlingx.io/ I studied the StarlingX documentation and got a rough understanding of StarlingX's structure and deployment, but I still haven't figured out how exactly StarlingX should be used, or which materials I should read next. Could you share some other materials or a demo for reference?
Thank you very much, and best wishes.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ~WRD000.jpg
Type: image/jpeg
Size: 823 bytes
Desc: ~WRD000.jpg
URL:

From Robert.Church at windriver.com Wed Jul 31 21:40:28 2019
From: Robert.Church at windriver.com (Church, Robert)
Date: Wed, 31 Jul 2019 21:40:28 +0000
Subject: [Starlingx-discuss] [Containers] How to retrieve data from a helm chart already applied
In-Reply-To: <07CBA65D-3C8C-404A-80B9-E303197D429E@intel.com>
References: <07CBA65D-3C8C-404A-80B9-E303197D429E@intel.com>
Message-ID: <5CF2347F-8E6F-4CE7-BB26-BE828252FD01@windriver.com>

Hi Jose,
You use "helm inspect <chart>" to look at the chart's values in the helm repo:

[sysadmin at controller-0 ~(keystone_admin)]$ helm inspect starlingx/nova | head
apiVersion: v1
description: OpenStack-Helm Nova
home: https://docs.openstack.org/nova/latest/
icon: https://www.openstack.org/themes/openstack/images/project-mascots/Nova/OpenStack_Project_Nova_vertical.png
maintainers:
- name: OpenStack-Helm Authors
name: nova
sources:
- https://git.openstack.org/cgit/openstack/nova
- https://git.openstack.org/cgit/openstack/openstack-helm

You use "helm get values <release>" to get the current values of the released service:

[sysadmin at controller-0 ~(keystone_admin)]$ helm get values osh-openstack-nova | head
ceph_client:
  user_secret_name: ceph-pool-kube-rbd
conf:
  ceph:
    enabled: true
  ephemeral_storage:
    rbd_pools:
    - rbd_chunk_size: 512
      rbd_crush_rule: storage_tier_ruleset
      rbd_pool_name: ephemeral

Regards,
Bob
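[Editor's note: two related Helm v2 commands that may also help when comparing what was customized against what is actually deployed; hedged, since flag availability can vary with the Helm version shipped on a given load:

# User-supplied overrides merged with the chart defaults, not just the overrides:
helm get values osh-openstack-nova --all | head
# Release status and the Kubernetes resources it created:
helm status osh-openstack-nova]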
From: "Perez Carranza, Jose"
Date: Wednesday, July 31, 2019 at 3:09 PM
To: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] [Containers] How to retrieve data from a helm chart already applied

Hi,
I'm trying to inspect a chart already applied on the StarlingX deployment. Does anyone know if I'm missing something in the steps mentioned below? Currently I'm not able to retrieve the data.

controller-0:~$ helm list | grep nova
osh-openstack-nova              1  Thu Jul 18 19:32:41 2019  DEPLOYED  nova-0.1.0            openstack
osh-openstack-nova-api-proxy    1  Thu Jul 18 19:32:41 2019  DEPLOYED  nova-api-proxy-0.1.0  openstack
-------
controller-0:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Unable to get an update from the "starlingx" chart repository (http://127.0.0.1:8080/helm_charts/starlingx):
    Get http://127.0.0.1:8080/helm_charts/starlingx/index.yaml: dial tcp 127.0.0.1:8080: connect: connection refused
...Unable to get an update from the "stx-platform" chart repository (http://127.0.0.1:8080/helm_charts/stx-platform):
    Get http://127.0.0.1:8080/helm_charts/stx-platform/index.yaml: dial tcp 127.0.0.1:8080: connect: connection refused
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
    Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp 172.217.164.112:443: connect: connection timed out
Update Complete. ⎈ Happy Helming!⎈

controller-0:~$ helm inspect osh-openstack-nova
Error: failed to download "osh-openstack-nova" (hint: running `helm repo update` may help)

Regards
Jose
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maria.g.perez.ibarra at intel.com Wed Jul 31 21:46:44 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Wed, 31 Jul 2019 21:46:44 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190731
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-31 (link)
Status: GREEN

===========================================
Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

===========================================

Regards
Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yong.hu at intel.com Wed Jul 31 21:51:28 2019
From: yong.hu at intel.com (Yong Hu)
Date: Wed, 31 Jul 2019 14:51:28 -0700
Subject: [Starlingx-discuss] [Containers] How to retrieve data from a helm chart already applied
In-Reply-To: <07CBA65D-3C8C-404A-80B9-E303197D429E@intel.com>
References: <07CBA65D-3C8C-404A-80B9-E303197D429E@intel.com>
Message-ID: <7b5fd46f-48a4-17bf-c8f1-844e53920911@intel.com>

Have a try at setting the NO_PROXY env:
export NO_PROXY="10.*, *.intel.com, localhost, 127.0.0.1,192.168.206.2"

On 31/07/2019 1:07 PM, Perez Carranza, Jose wrote:
> Hi
> I'm trying to inspect a chart already applied on the StarlingX
> deployment. Does anyone know if I'm missing something in the steps
> mentioned below? Currently I'm not able to retrieve the data.
>
> controller-0:~$ helm list | grep nova
> osh-openstack-nova              1  Thu Jul 18 19:32:41 2019  DEPLOYED  nova-0.1.0            openstack
> osh-openstack-nova-api-proxy    1  Thu Jul 18 19:32:41 2019  DEPLOYED  nova-api-proxy-0.1.0  openstack
> -------
> controller-0:~$ helm repo update
> Hang tight while we grab the latest from your chart repositories...
> ...Skip local chart repository
> ...Unable to get an update from the "starlingx" chart repository
> (http://127.0.0.1:8080/helm_charts/starlingx):
>     Get http://127.0.0.1:8080/helm_charts/starlingx/index.yaml: dial tcp
> 127.0.0.1:8080: connect: connection refused
> ...Unable to get an update from the "stx-platform" chart repository
> (http://127.0.0.1:8080/helm_charts/stx-platform):
>     Get http://127.0.0.1:8080/helm_charts/stx-platform/index.yaml: dial tcp
> 127.0.0.1:8080: connect: connection refused
> ...Unable to get an update from the "stable" chart repository
> (https://kubernetes-charts.storage.googleapis.com):
>     Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp
> 172.217.164.112:443: connect: connection timed out
> Update Complete. ⎈ Happy Helming!⎈
>
> controller-0:~$ helm inspect osh-openstack-nova
> Error: failed to download "osh-openstack-nova" (hint: running `helm repo
> update` may help)
>
> Regards
> Jose
> --
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>

From michael.l.tullis at intel.com Wed Jul 31 22:04:45 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Wed, 31 Jul 2019 22:04:45 +0000
Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 7/31/2019
Message-ID: <3808363B39586544A6839C76CF81445EA1B9DFDB@ORSMSX104.amr.corp.intel.com>

For notes and new action items from our docs team meeting today, see our etherpad:
https://etherpad.openstack.org/p/stx-documentation
Thanks to the team for long work hours and extra diligence this week. Join us if you have an interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings.
-- Mike
-------------- next part --------------
An HTML attachment was scrubbed...
URL: