From Ian.Jolliffe at windriver.com Mon Jun 1 02:07:32 2020 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Mon, 1 Jun 2020 02:07:32 +0000 Subject: [Starlingx-discuss] [TSC] Minutes 5/27 Message-ID: PTG starts tomorrow – all are welcome – Etherpad here: https://etherpad.opendev.org/p/stx-virtual-PTG-June Please put your name on the etherpad if you plan on joining.
Notes from TSC Meeting:
* Final prep for PTG - Starts June 1st
  o Airship joint session during the Monday time slot, they would prefer some time earlier (ildikov)
    § Airship plans to share their project changes
    § Do this first and then move to the StarlingX PTG agenda.
    § Ildiko confirmed
* License review process - please read prior to meeting
  o https://governance.openstack.org/tc/reference/licensing.html
    § OpenStack mandates (and ensures via a legal agreement) that all software written is made available under Apache License version 2; it is possible because all code is written within the project
    § Some of OpenStack's software can be considered derivative works of its dependencies, so the OpenStack Requirements team reviews the licenses of dependencies as they're added, tracked centrally in a single file: https://opendev.org/openstack/requirements/src/branch/master/global-requirements.txt
    § In addition Zuul has some software derived from Ansible under GPL, v3: https://opendev.org/zuul/zuul#user-content-license
      · As stated on the page above, they make sure that the comments at the tops of individual source code files reflect the corresponding licenses for them
    § Are we covered on the integrated pieces vs. the Flock code? Dependency licenses - what else do we need to do? Cover in PTG - find a time slot, so foundation people can help guide us. We just need to document the approach.
* TSC election
  o https://review.opendev.org/730969
From yong.hu at intel.com Mon Jun 1 05:04:33 2020 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 1 Jun 2020 05:04:33 +0000 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> Message-ID: <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> Hi Ildikó, We haven't seen the dedicated zoom bridge sent for the vPTG. Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? Regards, Yong On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: Hi StarlingX Community, As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. Please let me know if you have any questions.
Thanks, Ildikó [1] https://www.openstack.org/ptg/ _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Mon Jun 1 08:20:15 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 1 Jun 2020 08:20:15 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! 
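A quick way to watch that recovery by hand is plain kubectl; the openstack namespace and the openvswitch-db pod name are taken from the logs above, and the rest of this sketch is an assumption rather than part of any test suite:
    # follow the stx-openstack pods until they settle after the forced reboot
    kubectl get pods -n openstack -o wide -w
    # list anything that is still not Running/Completed
    kubectl get pods -n openstack | grep -Ev 'Running|Completed'
    # dump the recent events of a stuck pod, e.g. the openvswitch-db one above
    kubectl describe pod -n openstack openvswitch-db-8fxkw | tail -n 30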
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... 
[OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... 
[OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Mon Jun 1 11:49:35 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 1 Jun 2020 13:49:35 +0200 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> Message-ID: <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> Hi Yong, It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ Thanks, Ildikó > On Jun 1, 2020, at 07:04, Hu, Yong wrote: > > Hi Ildikó, > We haven't seen the dedicated zoom bridge sent for the vPTG. > Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? > > Regards, > Yong > > On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: > > Hi StarlingX Community, > > As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). 
> > Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. > > As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. > > Please let me know if you have any questions. > > Thanks, > Ildikó > > [1] https://www.openstack.org/ptg/ > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > From Brent.Rowsell at windriver.com Mon Jun 1 14:13:13 2020 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Mon, 1 Jun 2020 14:13:13 +0000 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> Message-ID: I've registered but have not received a pw. Brent -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Monday, June 1, 2020 7:50 AM To: Hu, Yong Cc: starlingx Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT Hi Yong, It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ Thanks, Ildikó > On Jun 1, 2020, at 07:04, Hu, Yong wrote: > > Hi Ildikó, > We haven't seen the dedicated zoom bridge sent for the vPTG. > Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? > > Regards, > Yong > > On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: > > Hi StarlingX Community, > > As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). > > Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. > > As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. > > Please let me know if you have any questions. 
> > Thanks, > Ildikó > > [1] https://www.openstack.org/ptg/ > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Barton.Wensley at windriver.com Mon Jun 1 14:19:23 2020 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Mon, 1 Jun 2020 14:19:23 +0000 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> Message-ID: They are sending the password to the email you used to register (with eventbrite). For me, the password was in an email titled "24 hours left until the PTG!", which was easy to miss. Bart -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: June 1, 2020 10:13 AM To: Ildiko Vancsa; Hu, Yong Cc: starlingx Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT I've registered but have not received a pw. Brent -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Monday, June 1, 2020 7:50 AM To: Hu, Yong Cc: starlingx Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT Hi Yong, It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ Thanks, Ildikó > On Jun 1, 2020, at 07:04, Hu, Yong wrote: > > Hi Ildikó, > We haven't seen the dedicated zoom bridge sent for the vPTG. > Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? > > Regards, > Yong > > On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: > > Hi StarlingX Community, > > As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). > > Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. > > As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. > > Please let me know if you have any questions. 
> > Thanks, > Ildikó > > [1] https://www.openstack.org/ptg/ > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From kire at kth.se Mon Jun 1 16:16:22 2020 From: kire at kth.se (=?utf-8?B?SmFuLUVyaWsgTcOlbmdz?=) Date: Mon, 1 Jun 2020 16:16:22 +0000 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> Message-ID: <386A1444-4DA3-4000-9FB9-9C38D375BA4A@kth.se> I also didn’t receive a pw, and I can’t find any “24 hours left until the PTG!”-email either. /Jan-Erik (registered with eventbrite using my corporate email jan-erik.mangs at ericsson.com) 1 juni 2020 kl. 16:19 skrev Wensley, Barton >: They are sending the password to the email you used to register (with eventbrite). For me, the password was in an email titled "24 hours left until the PTG!", which was easy to miss. Bart -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: June 1, 2020 10:13 AM To: Ildiko Vancsa; Hu, Yong Cc: starlingx Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT I've registered but have not received a pw. Brent -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Monday, June 1, 2020 7:50 AM To: Hu, Yong > Cc: starlingx > Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT Hi Yong, It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ Thanks, Ildikó On Jun 1, 2020, at 07:04, Hu, Yong > wrote: Hi Ildikó, We haven't seen the dedicated zoom bridge sent for the vPTG. Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? Regards, Yong On 2020/5/26, 9:54 PM, "Ildiko Vancsa" > wrote: Hi StarlingX Community, As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. 
Please let me know if you have any questions. Thanks, Ildikó [1] https://www.openstack.org/ptg/ _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Jun 1 16:25:08 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 1 Jun 2020 18:25:08 +0200 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: <386A1444-4DA3-4000-9FB9-9C38D375BA4A@kth.se> References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> <386A1444-4DA3-4000-9FB9-9C38D375BA4A@kth.se> Message-ID: Hi, Sorry, I was running the edge session, so was limited in mails. If someone did not receive mails from Eventbrite with further details and/or still having issues please reach out in mail to the PTG helpdesk: ptg at openstack.org Thanks, Ildikó > On Jun 1, 2020, at 18:16, Jan-Erik Mångs wrote: > > I also didn’t receive a pw, and I can’t find any “24 hours left until the PTG!”-email either. > > /Jan-Erik > (registered with eventbrite using my corporate email jan-erik.mangs at ericsson.com) > > > >> 1 juni 2020 kl. 16:19 skrev Wensley, Barton : >> >> They are sending the password to the email you used to register (with eventbrite). >> >> For me, the password was in an email titled "24 hours left until the PTG!", which was easy to miss. >> >> Bart >> >> -----Original Message----- >> From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] >> Sent: June 1, 2020 10:13 AM >> To: Ildiko Vancsa; Hu, Yong >> Cc: starlingx >> Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT >> >> I've registered but have not received a pw. >> >> Brent >> >> -----Original Message----- >> From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] >> Sent: Monday, June 1, 2020 7:50 AM >> To: Hu, Yong >> Cc: starlingx >> Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT >> >> Hi Yong, >> >> It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. >> >> The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. >> >> You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. 
>> >> __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ >> >> Thanks, >> Ildikó >> >> >>> On Jun 1, 2020, at 07:04, Hu, Yong wrote: >>> >>> Hi Ildikó, >>> We haven't seen the dedicated zoom bridge sent for the vPTG. >>> Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? >>> >>> Regards, >>> Yong >>> >>> On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: >>> >>> Hi StarlingX Community, >>> >>> As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). >>> >>> Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. >>> >>> As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. >>> >>> Please let me know if you have any questions. >>> >>> Thanks, >>> Ildikó >>> >>> [1] https://www.openstack.org/ptg/ >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From yong.hu at intel.com Mon Jun 1 14:49:39 2020 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 1 Jun 2020 14:49:39 +0000 Subject: [Starlingx-discuss] proposals for STX.5.0 - to present in this incoming vPTG Message-ID: Hi Folks, We did a bit homework for StarlingX vPTG topics and here are 3 proposals to present during vPTG, please have a quick look and share your feedback with us: 1. Sdo_proposal.pdf: Use Intel SDO to get small nodes on-board in the context of StarlingX. – Presenter: Yi 2. Starlingx AppHub.pdf: create a project to host “Applications” like, EdgeX, K8S dashboard, Intel EB (RNI) etc., so that StarlingX users can get one-stop solution for testing or evaluation. - Presenter: Mingyuan 3. Hummingbird: a solution to get the *small node* joining StarlingX K8S cluster, by working as a kubelet. – Presenter: Mingyuan   Regards, Yong -------------- next part -------------- A non-text attachment was scrubbed... Name: sdo_proposal.pdf Type: application/pdf Size: 356295 bytes Desc: sdo_proposal.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: StarlingX AppHub.pdf Type: application/pdf Size: 141947 bytes Desc: StarlingX AppHub.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Hummingbird - StarlingX small node management.pdf Type: application/pdf Size: 444879 bytes Desc: Hummingbird - StarlingX small node management.pdf URL: From allison at openstack.org Mon Jun 1 19:55:45 2020 From: allison at openstack.org (Allison Price) Date: Mon, 1 Jun 2020 14:55:45 -0500 Subject: [Starlingx-discuss] StarlingX Press Release Draft Message-ID: <9D8A18D4-1761-4860-89AA-7B25D2DD113F@openstack.org> Hi everyone, I hope you’re having a great week at the PTG! Below is a link to the press release draft for the potential StarlingX confirmation on June 11 with the OSF Board of Directors. If your organization is contributing to StarlingX and would like to provide a quote for the press release or have any feedback, please reach out to me directly. Thanks, Allison https://docs.google.com/document/d/1VhVUNuBJZ6NuEGix_L5PIcOiYV2RM_K4KAIXjDa_W1M/edit?usp=sharing From nicolae.jascanu at intel.com Mon Jun 1 20:21:32 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Mon, 1 Jun 2020 20:21:32 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200530T013359Z Message-ID: Sanity Test from 2020-May-30 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200530T013359Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200530T013359Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test on Virtual Environment was NOT executed because the setup was used for debugging and regression testing Regards, STX Validation Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Jun 1 21:07:04 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 1 Jun 2020 23:07:04 +0200 Subject: [Starlingx-discuss] StarlingX PTG session starts in less than an hour Message-ID: <546392F7-8881-4F2B-9C52-43D53E7AE421@gmail.com> Hi, It is a friendly reminder that the StarlingX session at the virtual PTG event starts in less than an hour. If you already registered for the event you should have received information in email with details about how to join. If you are interested in attending but haven’t registered yet please do so here: https://virtualptgjune2020.eventbrite.com Once you registered you will receive all the necessary information about participating in the event. For agenda please see the following etherpad: https://etherpad.opendev.org/p/stx-virtual-PTG-June I would also like to remind you to the joint session with the Kata Containers community tomorrow at 1400 UTC. See you in a bit! 
Thanks, Ildikó From tyler.smith at windriver.com Mon Jun 1 21:40:23 2020 From: tyler.smith at windriver.com (Smith, Tyler) Date: Mon, 1 Jun 2020 21:40:23 +0000 Subject: [Starlingx-discuss] Fault Containerization: Enable FM panels in Openstack Dashboard In-Reply-To: <08A07A3B6772DE42BB77D7AE70889B8A968E95E3@BGSMSX103.gar.corp.intel.com> References: <08A07A3B6772DE42BB77D7AE70889B8A8F09359C@BGSMSX101.gar.corp.intel.com> <08A07A3B6772DE42BB77D7AE70889B8A968E95E3@BGSMSX103.gar.corp.intel.com> Message-ID: Responses inline Thanks, Tyler From: Das, Ambarish [mailto:ambarish.das at intel.com] Sent: Friday, May 29, 2020 7:39 AM To: Smith, Tyler ; Penney, Don ; Mukherjee, Sanjay K Cc: Wold, Saul ; Jones, Bruce E ; Bhat, Gopalkrishna ; starlingx-discuss at lists.starlingx.io; Sun, Austin ; Eslimi, Dariush Subject: RE: Fault Containerization: Enable FM panels in Openstack Dashboard Hi Tyler, Thanks for explaining the details and we have few queries inline Thanks & regards, Ambarish/Sanjay From: Smith, Tyler > Sent: Friday, May 15, 2020 1:35 AM To: Das, Ambarish >; Penney, Don >; Mukherjee, Sanjay K > Cc: Wold, Saul >; Jones, Bruce E >; Bhat, Gopalkrishna >; starlingx-discuss at lists.starlingx.io; Sun, Austin >; Eslimi, Dariush > Subject: RE: Fault Containerization: Enable FM panels in Openstack Dashboard Hi Ambarish & Sanjay There were two approaches that were being looked at. The first was to use the same GUI plugin for both the platform horizon and containerized horizon, but only copy over the horizon 'enabled' files corresponding to the panels that we want to enable (fault panels in the containerized case). This is the approach that was tried but it ended up not working and required lots of hacks during the docker image build step, such as modifying the code, which we really want to avoid. The reasons it wasn't working weren't really clear to me, I didn't spend time debugging etc. Attached is some background on what was being discussed then. [AD/SM]: We are clear with this approach and I believe the abandoned patch has the required hack for this implementation (https://review.opendev.org/#/c/661423/4). We are able to reproduce this step with docker image build for stx-horizon and FM Panel is visible in openstack dashboard. Please let us know if anything wrong in this understanding/reproduction steps. [TS] Yes, the abandoned patch was working, but need to find a way to do it without those kinds of hacks The decision was made to instead split our plugin into two, one for the platform panels, and one for just the fault panels. This will involve creating a new package next to starlingx-dashboard (in the same repo though) that has a similar structure but only has the relevant fault components. Including: Api/fm.py Api/rest/fm.py Dashboards/admin/active_alarms/ Static/dashboard/fault_management/ Enabled/ -> need the fm related enabled files in here, along with the banner view header section definition (see ADD_HEADER_SECTIONS). These files will get copied over in the docker image build step. The only other instruction in this step should be the csrftoken customization command from the attached email, which I think unfortunately is required. [AD/SM]: As per our understanding all these changes will be part of stx-gui module. Need more information regarding stx-gui component to understand better. Please let us know if any documentation link there to refer for this module ( It would be really helpful if we can approach a POC/module expert for this). 
Also was there any patch created with these changes earlier? [TS] Yes, the changes will be to stx-gui, there's no specific documentation on that module, but as it is a horizon plugin it will roughly follow the structure and features in the openstack plugin documentation mentioned below. If you have specific questions feel free to ask me. There has been no prior attempt at this approach As for the settings for the containerized horizon, they are stored in the openstack helm application manifest here: openstack-armada-app/stx-openstack-helm/stx-openstack-helm/manifests/manifest.yaml My understanding is fault management will remain in the platform as well. A distributed cloud deployment will also have to be tested, as the dc_admin dashboard also queries fm. There's decent documentation on the plugin structure upstream: https://docs.openstack.org/horizon/latest/contributor/tutorials/plugin.html Let me know if you need more details Tyler From: Das, Ambarish [mailto:ambarish.das at intel.com] Sent: Wednesday, May 13, 2020 2:22 AM To: Penney, Don >; Smith, Tyler > Cc: Wold, Saul >; Jones, Bruce E >; Bhat, Gopalkrishna >; starlingx-discuss at lists.starlingx.io; Mukherjee, Sanjay K >; Sun, Austin > Subject: Fault Containerization: Enable FM panels in Openstack Dashboard Hello Tyler & Don, We have started looking into the remaining work in Fault Containerization and looked into the earlier abandoned patch implementation (https://review.opendev.org/#/c/661423/). As we have joined the team newly, we would like to understand GUI and Horizon implementation and next steps to move forward regarding this pending activity. We had a initial discussion regarding this with Saul and Austin and based on their inputs, we would like to have a discussion. Please let me know if you need any clarification. Thanks & regards, Ambarish/Sanjay -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Mon Jun 1 23:31:10 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 1 Jun 2020 19:31:10 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 130 - Failure! Message-ID: <1819066125.1573.1591054271498.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 130 Status: Failure Timestamp: 20200601T232418Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200601T232418Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From Frank.Miller at windriver.com Mon Jun 1 23:43:14 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 1 Jun 2020 23:43:14 +0000 Subject: [Starlingx-discuss] Sanity TC list (was RE: [OpenStack Ussuri Upgrade Task] Call for patch review!!) Message-ID: Nicolae: Thanks for sending the TC list for sanity. The reason sanity is not seeing the stx-openstack recovery issues after a controller reboot is that TC is not currently in the sanity suite. In the 02-Host-Management testcases I see lock/unlock TCs but not TCs where each controller is rebooted and checked to make sure all the apps and pods recover after the reboot. I suggest you plan to add in this type of testcase into sanity. 
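A rough outline of such a reboot-recovery check, using the usual StarlingX and kubectl CLIs; treat the exact sequence, host names and timeouts as assumptions, not as an existing sanity TC:
    # force-reset the standby controller from the active one
    ssh controller-1 'sudo reboot -f'
    # poll until the host is back online/available
    system host-list
    # confirm the application stayed applied and all pods recovered
    system application-show stx-openstack
    kubectl get pods -n openstack | grep -Ev 'Running|Completed'    # expect nothing left here once recovered
    # swact, then repeat against the other controller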
Frank -----Original Message----- From: Jascanu, Nicolae Sent: Wednesday, May 27, 2020 11:33 AM To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Please find below the list of sanity testcases executed: ######################### Sanity-Openstack ######################### ############# 01-Instance-From-Image.robot ########################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros and Centos instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros and Centos instances. Create Networks For Instances [Documentation] Create networks to be used to launch Cirros and Centos ... instances. Launch Instances [Documentation] Launch Cirros and Centos instances. Suspend Resume Instances [Documentation] Suspend and Resume Cirros and Centos instances. Set Error Active Flags Instances [Documentation] Set 'Error' and 'Active' flags to Cirros and Centos ... instances. Pause Unpause Instances [Documentation] Pause and Unpause Cirros and Centos instances. Stop Start Instances [Documentation] Stop and Start Cirros and Centos instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros and Centos instances. Reboot Instances [Documentation] Reboot Cirros and Centos instances. Rebuild Instances [Documentation] Rebuild Cirros and Centos instances. Resize Instances [Documentation] Resize Cirros instance. Create Flavor ${cirros_flavor_ram} ${cirros_flavor_vcpus} ... ${cirros_flavor_disk} ${cirros_flavor_name_2} Set Unset Properties Instances [Documentation] Set Unset properties of Cirros and Centos instances. Evacuate Instances From Hosts [Documentation] Evacuate all Cirros and Centos instances from computes ... or controllers. ############### 02-Instance-From-Volume.robot ###################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks For Instance [Documentation] Create networks to be used to launch Cirros ... instances. Create Volume For Instances [Documentation] Create volumes with or without properties to be used to ... to launch Cirros instances. Launch Instances [Documentation] Launch Cirros instances. Suspend Resume Instance [Documentation] Suspend and Resume Cirros instances. Set Error Active Flags Instance [Documentation] Set 'Error' and 'Active' flags to Cirros ... instance. Pause Unpause Instances [Documentation] Pause and Unpause Cirros instances. Stop Start Instances [Documentation] Stop and Start Cirros instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros instances. Reboot Instances [Documentation] Reboot Cirros instances. Rebuild Instances [Documentation] Rebuild Cirros instances. Resize Instances [Documentation] Resize Cirros instances. Set Unset Properties Instances [Documentation] Set Unset properties of Cirros instances. Evacuate Instances From Hosts [Documentation] Evacuate all Cirros instances from computes ... or controllers. ############### 03-Instance-From-Snapshot.robot ###################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. 
Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks For Instance [Documentation] Create networks to be used to launch Cirros and Centos ... instances. Create Volume For Instances [Documentation] Create volumes with or without properties to be used ... to launch Cirros instances. Create Snapshot For Instance [Documentation] Create snapshots with or without properties to be used ... to launch Cirros instances. Launch Instances [Documentation] Launch Cirros instances from snapshot. Suspend Resume Instances [Documentation] Suspend and Resume Cirros instances. Set Error Active Flags Instances [Documentation] Set 'Error' and 'Active' flags to Cirros instances. Pause Unpause Instances [Documentation] Pause and Unpause Cirros instances. Stop Start Instances [Documentation] Stop and Start Cirros instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros instances. Reboot Instances [Documentation] Reboot Cirros instances. Rebuild Instances [Documentation] Rebuild Cirros instances. Resize Instances [Documentation] Resize Cirros instances. Set Unset Properties Instances [Documentation] Set Unset properties of Cirros instances. Evacuate Instances From Hosts [Documentation] Evacuate all instances from computes or ... controllers. ############### 04-Instance-From-Heat-Template.robot ######################## Create Flavors for Instance [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images for Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks for Instance [Documentation] Create networks to be used to launch Cirros ... instances. Create Instance Trough Stack [Documentation] Create a Cirros instance using a heat template ############### 05-Measurements-For-Metric.robot ################# Create Image For Metrics [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Update Image Name [Documentation] Update image name. Update Image Disk Ram Size [Documentation] Update image disk size and ram size. ########################### Sanity-Platform ########################### ############# 01-OpenStack-Pod-Healthy.robot ######################## OpenStack PODs Healthy [Documentation] Check all OpenStack pods are healthy, in Running or ... Completed state. Reapply STX OpenStack [Documentation] Re apply stx openstack application without any ... modification to helm charts. STX OpenStack Override Update Reset [Documentation] Helm override for OpenStack nova chart and reset. Kube System Services [Documentation] Check pods status and kube-system services are ... displayed. Create Check Delete POD [Documentation] Launch a POD via kubectl. ################ 02-Host-Management.robot ######################## Add Controller Host Simplex [Documentation] Try to add a new controller on a Simplex ... configuration, expect to fail. Swact Controller Host Simplex [Documentation] Try to perform a swact controller on a Simplex ... configuration, expect to fail. Lock Active Controller [Documentation] Try to perform a lock to the Active controller Lock Unlock Standby Controller [Documentation] Perform a lock/unlock to the Standby controller Lock Unlock Compute Host [Documentation] Perform a lock/unlock to the compute node Lock Unlock Storage Host [Documentation] Perform a lock/unlock to the storage node Regards, Nicolae Jascanu, Ph.D. 
TSD Software Engineer Internet Of Things Group Galati, Romania -----Original Message----- From: Miller, Frank Sent: Wednesday, May 27, 2020 17:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... 
[OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
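For reference, this kind of dynamic override is normally pushed through the StarlingX helm-override CLI and then re-applied; a minimal sketch, assuming the stx-openstack application with the mariadb chart in the openstack namespace (the file name here is made up, and option spelling can differ between releases):

cat > mariadb-ipv6-override.yaml <<'EOF'
conf:
  database:
    config_override: |
      [mysqld]
      bind_address=::
EOF
# record the user override for the mariadb chart, then re-apply the application
system helm-override-update stx-openstack mariadb openstack --values mariadb-ipv6-override.yaml
system application-apply stx-openstack
# the file rendered inside the pod can then be checked, as shown further below in this mail
kubectl -n openstack exec -it mariadb-server-0 -- cat /etc/mysql/conf.d/20-override.cnf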
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Jun 2 05:37:20 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 2 Jun 2020 01:37:20 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 131 - Still Failing! In-Reply-To: <54924716.1571.1591054269707.JavaMail.javamailuser@localhost> References: <54924716.1571.1591054269707.JavaMail.javamailuser@localhost> Message-ID: <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 131 Status: Still Failing Timestamp: 20200602T053143Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200602T053143Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From nicolae.jascanu at intel.com Tue Jun 2 07:44:45 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Tue, 2 Jun 2020 07:44:45 +0000 Subject: [Starlingx-discuss] No new layered builds Message-ID: Hi, Since Saturday, May 30 there are no new builds. The last report was sent for build: 20200530T013359Z Regards, Nicolae Jascanu, Ph.D. TSD Software Engineer [intel-logo] Internet Of Things Group Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3923 bytes Desc: image001.png URL: From shuicheng.lin at intel.com Tue Jun 2 08:25:57 2020 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 2 Jun 2020 08:25:57 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 131 - Still Failing! In-Reply-To: <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> References: <54924716.1571.1591054269707.JavaMail.javamailuser@localhost> <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, I try to reproduce the mirror issue in my local environment. It seems it is due to lack of repodata of " http://mirror.starlingx.cengn.ca/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/". If I switch to use original ceph repo which contains the repodata folder, I could download rpms successfully. But my local error message is not the same as CENGN's. This debug data is just for you reference. 
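A quick way to confirm whether a repo URL actually publishes yum metadata is to probe repodata/repomd.xml directly and, for a locally mirrored tree, to regenerate it; a rough sketch only (the mirror path is the one from this thread, and createrepo availability on the build host is an assumption). The original yumdownloader session follows below.

# a 404 here means the mirrored tree carries no usable metadata for this repo
curl -sI http://mirror.starlingx.cengn.ca/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml | head -n1
# compare against the upstream ceph repo, which does publish repodata
curl -sI https://download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml | head -n1
# if the local mirror is simply missing metadata, it can be regenerated in place:
#   createrepo /path/to/mirror/download.ceph.com/rpm-mimic/el7/x86_64/
#   yum clean metadata   (then retry yumdownloader)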
[slin14 at 0ca513348895 yum.repos.d]$ sudo -E yumdownloader -q -c /tmp/stx_mirror_BBoGyH/yum.conf --releasever=7 --exclude='*.i686' --archlist=noarch,x86_64 --url rh-python36-runtime-2.0-1.el7 http://mirror.starlingx.cengn.ca/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article https://access.redhat.com/articles/1320623 If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ failure: repodata/repomd.xml from ceph-ussuri: [Errno 256] No more mirrors to try. http://mirror.starlingx.cengn.ca/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found [slin14 at 0ca513348895 yum.repos.d]$ echo $? 1 Best Regards Shuicheng -----Original Message----- From: build.starlingx at gmail.com Sent: Tuesday, June 2, 2020 1:37 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 131 - Still Failing! Project: STX_build_layer_flock_master_master Build #: 131 Status: Still Failing Timestamp: 20200602T053143Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200602T053143Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Tue Jun 2 08:48:11 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 2 Jun 2020 08:48:11 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. 
https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. 
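For the FailedMount and probe warnings shown earlier in this mail, one way to narrow down where the recovery time is spent is to watch the pod events and container state directly while the node comes back; a minimal sketch (the pod names are the ones reported in this thread and will differ on another deployment):

# overall recovery picture for the slow pods after the reboot
kubectl -n openstack get pods -o wide | grep -E 'openvswitch|neutron-ovs'
# event timeline for the stuck pods
kubectl -n openstack describe pod openvswitch-db-8fxkw | tail -n 30
kubectl -n openstack get events --sort-by=.metadata.creationTimestamp | grep -iE 'failedmount|unhealthy' | tail -n 20
# check whether the ovsdb socket that the probes test is actually present on the host
ls -l /var/run/openvswitch/db.sock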
No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. 
[OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. 
In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. 
> > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From nicolae.jascanu at intel.com Tue Jun 2 09:29:24 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Tue, 2 Jun 2020 09:29:24 +0000 Subject: [Starlingx-discuss] Sanity TC list (was RE: [OpenStack Ussuri Upgrade Task] Call for patch review!!) In-Reply-To: References: Message-ID: Hi Frank, We will need to allocate some bandwidth to create a sanity test for this LP. Meanwhile we are following with Zhipeng to understand exactly the steps and timings we need to check Regards, Nicolae Jascanu, Ph.D. TSD Software Engineer Internet Of Things Group Galati, Romania -----Original Message----- From: Miller, Frank Sent: Tuesday, June 2, 2020 02:43 To: Jascanu, Nicolae ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: Sanity TC list (was RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!) Nicolae: Thanks for sending the TC list for sanity. The reason sanity is not seeing the stx-openstack recovery issues after a controller reboot is that TC is not currently in the sanity suite. In the 02-Host-Management testcases I see lock/unlock TCs but not TCs where each controller is rebooted and checked to make sure all the apps and pods recover after the reboot. I suggest you plan to add in this type of testcase into sanity. Frank -----Original Message----- From: Jascanu, Nicolae Sent: Wednesday, May 27, 2020 11:33 AM To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Please find below the list of sanity testcases executed: ######################### Sanity-Openstack ######################### ############# 01-Instance-From-Image.robot ########################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros and Centos instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros and Centos instances. Create Networks For Instances [Documentation] Create networks to be used to launch Cirros and Centos ... instances. Launch Instances [Documentation] Launch Cirros and Centos instances. Suspend Resume Instances [Documentation] Suspend and Resume Cirros and Centos instances. Set Error Active Flags Instances [Documentation] Set 'Error' and 'Active' flags to Cirros and Centos ... 
instances. Pause Unpause Instances [Documentation] Pause and Unpause Cirros and Centos instances. Stop Start Instances [Documentation] Stop and Start Cirros and Centos instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros and Centos instances. Reboot Instances [Documentation] Reboot Cirros and Centos instances. Rebuild Instances [Documentation] Rebuild Cirros and Centos instances. Resize Instances [Documentation] Resize Cirros instance. Create Flavor ${cirros_flavor_ram} ${cirros_flavor_vcpus} ... ${cirros_flavor_disk} ${cirros_flavor_name_2} Set Unset Properties Instances [Documentation] Set Unset properties of Cirros and Centos instances. Evacuate Instances From Hosts [Documentation] Evacuate all Cirros and Centos instances from computes ... or controllers. ############### 02-Instance-From-Volume.robot ###################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks For Instance [Documentation] Create networks to be used to launch Cirros ... instances. Create Volume For Instances [Documentation] Create volumes with or without properties to be used to ... to launch Cirros instances. Launch Instances [Documentation] Launch Cirros instances. Suspend Resume Instance [Documentation] Suspend and Resume Cirros instances. Set Error Active Flags Instance [Documentation] Set 'Error' and 'Active' flags to Cirros ... instance. Pause Unpause Instances [Documentation] Pause and Unpause Cirros instances. Stop Start Instances [Documentation] Stop and Start Cirros instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros instances. Reboot Instances [Documentation] Reboot Cirros instances. Rebuild Instances [Documentation] Rebuild Cirros instances. Resize Instances [Documentation] Resize Cirros instances. Set Unset Properties Instances [Documentation] Set Unset properties of Cirros instances. Evacuate Instances From Hosts [Documentation] Evacuate all Cirros instances from computes ... or controllers. ############### 03-Instance-From-Snapshot.robot ###################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks For Instance [Documentation] Create networks to be used to launch Cirros and Centos ... instances. Create Volume For Instances [Documentation] Create volumes with or without properties to be used ... to launch Cirros instances. Create Snapshot For Instance [Documentation] Create snapshots with or without properties to be used ... to launch Cirros instances. Launch Instances [Documentation] Launch Cirros instances from snapshot. Suspend Resume Instances [Documentation] Suspend and Resume Cirros instances. Set Error Active Flags Instances [Documentation] Set 'Error' and 'Active' flags to Cirros instances. Pause Unpause Instances [Documentation] Pause and Unpause Cirros instances. Stop Start Instances [Documentation] Stop and Start Cirros instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros instances. Reboot Instances [Documentation] Reboot Cirros instances. Rebuild Instances [Documentation] Rebuild Cirros instances. Resize Instances [Documentation] Resize Cirros instances. 
Set Unset Properties Instances [Documentation] Set Unset properties of Cirros instances. Evacuate Instances From Hosts [Documentation] Evacuate all instances from computes or ... controllers. ############### 04-Instance-From-Heat-Template.robot ######################## Create Flavors for Instance [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images for Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks for Instance [Documentation] Create networks to be used to launch Cirros ... instances. Create Instance Trough Stack [Documentation] Create a Cirros instance using a heat template ############### 05-Measurements-For-Metric.robot ################# Create Image For Metrics [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Update Image Name [Documentation] Update image name. Update Image Disk Ram Size [Documentation] Update image disk size and ram size. ########################### Sanity-Platform ########################### ############# 01-OpenStack-Pod-Healthy.robot ######################## OpenStack PODs Healthy [Documentation] Check all OpenStack pods are healthy, in Running or ... Completed state. Reapply STX OpenStack [Documentation] Re apply stx openstack application without any ... modification to helm charts. STX OpenStack Override Update Reset [Documentation] Helm override for OpenStack nova chart and reset. Kube System Services [Documentation] Check pods status and kube-system services are ... displayed. Create Check Delete POD [Documentation] Launch a POD via kubectl. ################ 02-Host-Management.robot ######################## Add Controller Host Simplex [Documentation] Try to add a new controller on a Simplex ... configuration, expect to fail. Swact Controller Host Simplex [Documentation] Try to perform a swact controller on a Simplex ... configuration, expect to fail. Lock Active Controller [Documentation] Try to perform a lock to the Active controller Lock Unlock Standby Controller [Documentation] Perform a lock/unlock to the Standby controller Lock Unlock Compute Host [Documentation] Perform a lock/unlock to the compute node Lock Unlock Storage Host [Documentation] Perform a lock/unlock to the storage node Regards, Nicolae Jascanu, Ph.D. TSD Software Engineer Internet Of Things Group Galati, Romania -----Original Message----- From: Miller, Frank Sent: Wednesday, May 27, 2020 17:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Dariush.Eslimi at windriver.com Tue Jun 2 13:21:00 2020 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Tue, 2 Jun 2020 13:21:00 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Config/DC/Flock/Upgrade Bi-weekly Meeting Message-ID: Cancelling due to PTG. 
All, This will not be a status meeting, please bring your questions or bring issues that requires discussions that would help you make decisions. Thanks, Dariush Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC (every 2 weeks) Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Meeting notes are at https://etherpad.openstack.org/p/stx-config_DC_flock Subproject wikis: https://wiki.openstack.org/wiki/StarlingX/Config https://wiki.openstack.org/wiki/StarlingX/DistCloud https://wiki.openstack.org/wiki/StarlingX/FlockServices -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2465 bytes Desc: not available URL: From Dan.Voiculeasa at windriver.com Tue Jun 2 13:22:59 2020 From: Dan.Voiculeasa at windriver.com (Voiculeasa, Dan) Date: Tue, 2 Jun 2020 13:22:59 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: Message-ID: Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat ‘/tmp/hieradata/192.188.204.3.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory\ncp: cannot stat ‘>’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/tmp/hieradata/192.188.204.3.yaml’: No such file or directory", "cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory", "cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory", "cp: cannot stat ‘>’: No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. 
See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Tue Jun 2 13:36:57 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 2 Jun 2020 13:36:57 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: Message-ID: Great thanks Voiculeasa. I already setup backup and restore, simplex. One question, for restore, currently only platform restore is enabled, correct? What restore for openstack? Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! 
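Before re-running the restore, the two things worth capturing are the puppet log that the failed task points at and whether the bootstrap hieradata was generated at all; a short triage sketch using the paths from the failure output above:

# warnings and errors from the bootstrap manifest apply
grep -iE 'error|warning' /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log | tail -n 40
# the cp errors above suggest the per-host hieradata was never written, so check what exists
ls -l /tmp/hieradata/
# then retry the restore with verbose output to see which task should generate that hieradata
sudo ansible-playbook -vvv /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml \
    -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz"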
Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Jun 2 14:06:21 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 2 Jun 2020 14:06:21 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: Message-ID: Martin: B&R only works for platform at the moment. For openstack there are outstanding commits that have not merged. I suggest that you just focus on getting B&R for the platform to work. Frank From: Chen, Haochuan Z Sent: Tuesday, June 02, 2020 9:37 AM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] issue for backup and restore Great thanks Voiculeasa. I already setup backup and restore, simplex. One question, for restore, currently only platform restore is enabled, correct? What restore for openstack? Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! 
Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Jun 2 15:20:50 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 2 Jun 2020 15:20:50 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Zhipeng, thank you. Based on the data below, this isn't one problem - it's multiple opportunities for performance optimizations across a number of components. Why is the host restart taking 3-4m ? Can we improve that? Etc.... Nothing here should be a gate for checking in the Ussuri code. My only question would be - do we consider the performance issues documented below to be release gating? brucej -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 2, 2020 1:48 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition
Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition
Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition
Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)
Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)

Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated!
Thanks!
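For reference, a rough way to watch that recovery window and the two pods called out above (standard kubectl commands; the pod names are taken from the log excerpt and will differ on another deployment):

# Watch the openstack pods come back after the forced reboot and note when
# openvswitch-db and neutron-ovs-agent finally report Ready
kubectl -n openstack get pods -w | grep -E "openvswitch-db|neutron-ovs-agent"

# Event history for the stuck pod, to see whether it is the volume mounts or the
# liveness/readiness probes that are still failing
kubectl -n openstack describe pod openvswitch-db-8fxkw | tail -n 40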
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Tue Jun 2 15:47:00 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 2 Jun 2020 15:47:00 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. 
https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. 
No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. 
[OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. 
In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. 
> > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Tue Jun 2 16:25:28 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 2 Jun 2020 16:25:28 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. 
(then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? 
For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
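As a rough illustration of what "check status of openstack while a controller is rebooting" could look like, a loop like the one below can be left running during the reset (a sketch only; it assumes the openstack CLI is already configured with credentials for the containerized cloud):

# Sample openstack availability every 10s during the controller reset and log the result
while true; do
    ts=$(date +%H:%M:%S)
    if openstack endpoint list > /dev/null 2>&1; then
        echo "$ts openstack OK"
    else
        echo "$ts openstack FAIL"
    fi
    sleep 10
done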
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! 
> > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yang.liu at windriver.com Tue Jun 2 02:25:28 2020 From: yang.liu at windriver.com (Liu, Yang (YOW)) Date: Tue, 2 Jun 2020 02:25:28 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Test meeting Message-ID: Canceled for this week due to PTG. Weekly meeting on Tuesday 8AM PT / 1500 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5239 bytes Desc: not available URL: From amy at demarco.com Tue Jun 2 18:25:37 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 2 Jun 2020 13:25:37 -0500 Subject: [Starlingx-discuss] [diversity] Hour of Healing Message-ID: The OSF Diversity and Inclusion Working Group recognizes that this is a trying time for our communities and colleagues. We would like to invite you to 'An Hour of Healing' on Thursday (June 4th at 17:30 - 18:30 UTC) where you can talk to others in a safe place. We invite you to use this time to express your feelings, or to just be able to talk to others without being judged. This session will adhere to the OSF Code of Conduct and zero tolerance for harassment policy, which means we will not be judging or condemning others (individuals or groups) inside OR outside of our immediate community. We will come together to heal, in mutually respectful dialogue, keeping in mind that while there are many different individual viewpoints, we all share pain collectively and can heal together. We will be using https://meetpad.opendev.org/PTGDiversityAndInclusion for this gathering. The OSF Diversity and Inclusion WG -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Wed Jun 3 00:54:31 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 00:54:31 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. 
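For anyone trying to reproduce this, a minimal sketch of the checks that show the stuck state (pod names are illustrative; mariadb-server-0 is the name seen elsewhere in this thread, and the grastate.dat path assumes the default galera data directory):

# Anything still crash-looping in the openstack namespace after both controllers return
kubectl -n openstack get pods | grep -Ei "CrashLoopBackOff|Error|Init:"

# For mariadb, the previous container log and the galera state file usually show why
# the cluster will not re-form after an ungraceful shutdown of both nodes
kubectl -n openstack logs mariadb-server-0 --previous | tail -n 50
kubectl -n openstack exec mariadb-server-0 -- cat /var/lib/mysql/grastate.dat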
Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. 
It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. 
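A rough way to time that recovery window is a small polling loop like the one below (a sketch only; it assumes kubectl access on the active controller and ignores Completed job pods):

  start=$(date +%s)
  until kubectl get pods -n openstack --no-headers 2>/dev/null \
        | grep -v Completed \
        | awk '{split($2,a,"/"); if (a[1]!=a[2]) bad++} END {exit (bad>0 || NR==0)}'; do
    sleep 10   # keep polling while any pod is not fully ready, or while the API server itself is still coming back
  done
  echo "openstack pods ready after $(( $(date +%s) - start ))s"

In this run the loop would have exited after roughly 10 minutes, almost all of it spent waiting on openvswitch-db and neutron-ovs-agent as shown above.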
For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... 
[OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Wed Jun 3 01:18:29 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 3 Jun 2020 01:18:29 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Release meeting Message-ID: Cancelling this week due to the PTG Weekly meeting on Thursday 11AM PT / 1900 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1857 bytes Desc: not available URL: From zhipengs.liu at intel.com Wed Jun 3 01:39:22 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 3 Jun 2020 01:39:22 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: BTW, https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash This crash could not be reproduced with daily build 20200516T080009Z! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 0:25 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. 
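When it does reproduce, an ordered capture along these lines is usually enough to narrow down the trigger sequence (a sketch; namespace and pod name as used earlier in this thread):

  kubectl get events -n openstack --sort-by=.metadata.creationTimestamp   # probe failures, restarts and kills in time order
  kubectl logs -n openstack mariadb-server-0 --timestamps --tail=200      # recent mariadb log lines with timestamps for correlation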
Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
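For reference, the static override those two reviews add has roughly the following shape; this is simply the flattened snippet from earlier in this thread written out, with the file name purely illustrative:

  cat > mariadb-ipv6-overrides.yaml <<'EOF'
  conf:
    database:
      config_override: |
        [mysqld]
        bind_address=::
  EOF

In the application manifest the same fragment sits under the mariadb chart's values: section, and it is picked up through the normal system helm-override-update / application-apply flow.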
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.  Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Wed Jun 3 02:03:42 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 3 Jun 2020 02:03:42 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since the decoupling commits introduced several regressions (at least 2), I would not propose doing this kind of stability test with the latest build. BTW, do we have a plan to revert them, considering this stability risk? Our Ussuri upgrade patches are waiting for it. ☹ Furthermore, we have not seen this test case, which force-reboots both controllers at the same time. Is it a new requirement? If not, have we passed this case before, and with which build? I'd like to help on it with the passing build for comparative analysis. From my point of view, mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that.
But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
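A quick sanity check after any override change is to look at what actually rendered inside the pod, for example (pod name and file path as shown elsewhere in this thread):

  kubectl exec -n openstack mariadb-server-0 -- cat /etc/mysql/conf.d/20-override.cnf   # the rendered override file
  kubectl get configmaps -n openstack | grep -i mariadb                                 # the configmaps it is generated from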
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Wed Jun 3 02:30:35 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 2 Jun 2020 22:30:35 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 393 - Failure! Message-ID: <1562809381.1584.1591151436754.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 393 Status: Failure Timestamp: 20200603T022044Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200603T020359Z DOCKER_BUILD_ID: jenkins-master-flock-20200603T020359Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200603T020359Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Wed Jun 3 02:30:42 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 2 Jun 2020 22:30:42 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! 
In-Reply-To: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> Message-ID: <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From Frank.Miller at windriver.com Wed Jun 3 02:38:21 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 02:38:21 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. 
Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
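For anyone who wants to try the bind_address change by hand before those patches merge, here is a minimal sketch that uses a user-level helm override instead of the static manifest override; it assumes the --values form of system helm-override-update and the mariadb chart/namespace names used earlier in this thread, and the file path is only an example:

  # write the override discussed above to a scratch file (path is illustrative)
  cat > /tmp/mariadb-ipv6-override.yaml <<'EOF'
  conf:
    database:
      config_override: |
        [mysqld]
        bind_address=::
  EOF
  # attach it to the mariadb chart in the openstack namespace, then reapply the app
  system helm-override-update --values /tmp/mariadb-ipv6-override.yaml stx-openstack mariadb openstack
  system application-apply stx-openstack

The merged result can be checked afterwards with system helm-override-show stx-openstack mariadb openstack; this is only equivalent to the manifest change if nothing else overrides conf.database for mariadb.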
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Wed Jun 3 07:17:06 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 3 Jun 2020 03:17:06 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 131 - Still Failing! In-Reply-To: <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> References: <54924716.1571.1591054269707.JavaMail.javamailuser@localhost> <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> Message-ID: <23d26a83-ba55-933f-8979-32e5ce8e2b8f@windriver.com> Two issues 1) There was an issue with the cengn mirroring process.  Recent *.repo changes weren't being fully mirrored.  I found the root cause and corrected it.  The recent content additions are now mirrored. 2) It appears that download_mirror.sh successfully fell back to pulling rpms from upstream sources for the monolithic build, but not for the flock build.   I don't fully understand this issue yet.  Having fixed the cengn mirror, the need for the fallback has been removed, so it's harder to reproduce. On 2020-06-02 1:37 a.m., build.starlingx at gmail.com wrote: > Project: STX_build_layer_flock_master_master > Build #: 131 > Status: Still Failing > Timestamp: 20200602T053143Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200602T053143Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Jun 3 07:56:47 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 3 Jun 2020 03:56:47 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Message-ID: This was an interesting one. 
We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso prefers locally built packages over downloaded ones, even if the downloaded one is of a higher version. Now that policy is open for debate, but that is what it does. Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. Flock layer build downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower-versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions. 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2? If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10, or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then layered build needs to apply the local-first policy across layers. Alternatively, perhaps drop the local-first policy, but add an audit tool to detect when a locally built package is being masked in this way. Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_layer_flock_master_master > Build #: 132 > Status: Still Failing > Timestamp: 20200603T020359Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Wed Jun 3 08:47:37 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 3 Jun 2020 08:47:37 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, For question #1: when we built the OpenStack Ussuri images, which are python3 only, they need python3-rbd and its related dependencies, so we added librados2-13.2.10 and related packages. The locally built librados2-13.2.2-0.el7.tis.25.x86_64.rpm is for python2. Shouldn't we let the build choose the local build first? Another option is moving these packages to the container layer, adding an rpms_centos.lst in config/centos/flock/? Thanks!
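To make the local-first question concrete, here is a rough sketch of the audit idea (not the actual build tooling): walk the locally built RPMs and warn whenever a downloaded RPM of the same name carries a higher version, which is exactly the situation that masked librados2 here. The directory arguments are placeholders, and sort -V only approximates rpm's real version ordering:

  #!/bin/bash
  # usage: audit-masked-rpms.sh <locally-built-rpm-dir> <downloaded-rpm-dir>   (hypothetical helper)
  LOCAL_DIR="$1"
  MIRROR_DIR="$2"

  # index downloaded RPMs: package name -> version-release
  declare -A mirror_ver
  while read -r rpm; do
      name=$(rpm -qp --qf '%{NAME}' "$rpm" 2>/dev/null) || continue
      mirror_ver["$name"]=$(rpm -qp --qf '%{VERSION}-%{RELEASE}' "$rpm" 2>/dev/null)
  done < <(find "$MIRROR_DIR" -name '*.rpm')

  # warn when a locally built package would lose to a higher-versioned downloaded one
  while read -r rpm; do
      name=$(rpm -qp --qf '%{NAME}' "$rpm" 2>/dev/null) || continue
      ver=$(rpm -qp --qf '%{VERSION}-%{RELEASE}' "$rpm" 2>/dev/null)
      m="${mirror_ver[$name]}"
      [ -z "$m" ] && continue
      highest=$(printf '%s\n%s\n' "$ver" "$m" | sort -V | tail -n 1)
      if [ "$highest" != "$ver" ]; then
          echo "WARNING: locally built $name-$ver is masked by downloaded $name-$m"
      fi
  done < <(find "$LOCAL_DIR" -name '*.rpm')

Run against the local build output and the downloaded-RPM directories, a check like this would have flagged librados2 13.2.2 versus 13.2.10 before build-iso was attempted.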
Zhipeng From: Scott Little Sent: 2020年6月3日 15:57 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! This was an interesting one. We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso preferres locally built packages over downloaded ones, even if the downloaded on is of higher version. Now that policy is open for debate, but that is what it does. Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions. 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10 , or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then layered build needs apply the local first policy accross layers. Alternatively, perhaps drop the local first policy, but add an audit tool to detect when a locally built package is being masked in this way. Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jun 3 13:07:47 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 3 Jun 2020 06:07:47 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Message-ID: <40d7fc31-e1f6-a416-3815-82f90df44c18@linux.intel.com> On 6/3/20 12:56 AM, Scott Little wrote: > This was an interesting one. > Yes, indeed, great investigative work! 
> We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part > of the distro layer for some time. > > A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of > the flock layer. > It looks like that commit actually added both librados2-13.2.10 and 13.2.2! My bad for not catching that. I was not aware that librados2 was being built as part of Ceph; I guess this is something we should be generally aware of. That change also brought in a load of Ceph-related packages (ceph-common, libcephfs2, ...), so there might be additional collisions that we don't know about yet! > Now build-iso preferres locally built packages over downloaded ones, > even if the downloaded on is of higher version.  Now that policy is open > for debate, but that is what it does. > > Monolithic build uses the lst files of all layers, but having built > librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects > librados2-13.2.2-0.el7.tis.25.x86_64.rpm over > librados2-13.2.10-0.el7.x86_64.rpm when building the iso. > > Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm > from the distro layer build.  It doesn't build it itself.  The downloads > from the two sources are lumped into a common repo, so it has no reason > to prefer the lower versioned rpm.  It selects > librados2-13.2.10-0.el7.x86_64.rpm. > Good research! This makes sense (I guess initially) > The final piece of the puzzle is the transitive list of requires for > librados2-13.2.10-0.el7.x86_64.rpm.  It has a new dependency that pulls > in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs > userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present.  It's > wasn't included in the recent lst file changes that added > librados2-13.2.10-0.el7.x86_64.rpm. > We do have userspace-rcu in distro, and lttng-ust is only part of the flock. It seems we have userspace-rcu-devel only in flock. So yeah, this seems to be where the problem is. > A flock layer build-iso should have caught this.  I suspect build-iso > was only performed on a monolithic build. > I know we probably don't have time, but it would be interesting to verify why the monolithic build did not catch this and whether the flock layer would actually catch it. > Open questions. > 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. > If yes, do we still need whatever modifications were applied to > librados2-13.2.2?  Do they need to be ported to librados2-13.2.10 , or > can we drop librados2 from the set of packages we have patches against? > As I mentioned above, librados2 is built as part of Ceph, so an additional question is whether Ceph-13.2.2 would have issues using librados2-13.2.10, or any of the other Ceph-related packages that got upgraded. Do we need to up-rev Ceph and build for both python2 and python3? > 2) For build-iso... should we prefer locally built packages even though > there is a higher package named in an lst?  If yes, then layered build > needs apply the local first policy accross layers. Alternatively, > perhaps drop the local first policy, but add an audit tool to detect > when a locally built package is being masked in this way. > Is this an edge case or common? Do we know of other cases like this? That might inform what kind of audit tool is needed. So, adding an audit tool might have caught this. librados2 is not actually in any list, as it's built as part of Ceph; it comes in as a Requires: for Ceph. The python3 update added it to the flock/rpms_centos.lst file.
Yes, I ducked the local vs higher question right now, maybe knowing the answer about Ceph's usage would help and if we have this issue elsewhere will help me. Sau! > Scott > > > On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: >> Project: STX_build_layer_flock_master_master >> Build #: 132 >> Status: Still Failing >> Timestamp: 20200603T020359Z >> >> Check logs at: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs >> -------------------------------------------------------------------------------- >> Parameters >> >> FULL_BUILD: false >> FORCE_BUILD: false >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Frank.Miller at windriver.com Wed Jun 3 14:11:50 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 14:11:50 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. 
We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. 
This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. 
[OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... 
[OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! 
> > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Wed Jun 3 14:28:01 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 3 Jun 2020 14:28:01 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Frank, Have we passed this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: June 3, 2020 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won't see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state.
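A minimal sketch of how to see the state dependence described above, assuming the stx-openstack application name and the mariadb chart/namespace arguments that appear later in this thread (adjust to your deployment):

    # Confirm whether stx-openstack is currently "applied" or only "uploaded"
    system application-list
    # Expected to succeed while the app is applied; fails while it is only uploaded
    system helm-override-show stx-openstack mariadb openstack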
Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 
2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? 
For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
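One rough way to script the second scenario is to poll pod health and a simple API call while the controller is being reset. This is only a sketch, assuming the stx-openstack pods run in the openstack namespace and that the OpenStack CLI is already configured on the node running the check; adapt it to the setup under test:

    # Run on the surviving controller while the other one is rebooting
    while true; do
      date
      # Any pods not Running/Completed point at services still recovering
      kubectl -n openstack get pods | grep -vE 'Running|Completed'
      # A basic API call shows whether openstack stays usable during the reset
      openstack server list --all-projects || echo "openstack API not responding"
      sleep 60
    done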
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! 
> > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Wed Jun 3 14:34:35 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 14:34:35 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. 
In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 
1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. 
What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! 
> > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Jun 3 14:52:37 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 3 Jun 2020 07:52:37 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Message-ID: <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> On 6/3/20 1:47 AM, Liu, ZhipengS wrote: > Hi Scott, > > For question #1, > > When we built openstack ussuri image which is python3 only. > > It needs python3-rbd and related dependency, so we add librados2-13.2.10 > and related packages. > > For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for python2. > > Shouldn’t  we let the build choose local build first? > Following up on this we need to be careful about which we choose, as I said in the other email is this a one-off issue or something that we see more of. So maybe an audit tool would help. > Another option is moving these packages to container layer, add > rpms_centos.lst in config/centos/flock/? > I understand this option better after chatting with Zhipeng, I think this might be the best option adding the Updated Ceph / RBD related packages to the container list which will be used for the Usurri container builds but not by the platform OS. This would mean that the containers would have Ceph 13.2.10 related packages and the platform OS would be 13.2.2. Would that cause problems or stability issues? Sau! > Thanks! > > Zhipeng > > *From:*Scott Little > *Sent:* 2020年6月3日15:57 > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] [build-report] > STX_build_layer_flock_master_master - Build # 132 - Still Failing! > > This was an interesting one. > > We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part > of the distro layer for some time. > > A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of > the flock layer. > > Now build-iso preferres locally built packages over downloaded ones, > even if the downloaded on is of higher version.  Now that policy is open > for debate, but that is what it does. 
> > Monolithic build uses the lst files of all layers, but having built > librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects > librados2-13.2.2-0.el7.tis.25.x86_64.rpm over > librados2-13.2.10-0.el7.x86_64.rpm when building the iso. > > Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm > from the distro layer build.  It doesn't build it itself.  The downloads > from the two sources are lumped into a common repo, so it has no reason > to prefer the lower versioned rpm.  It selects > librados2-13.2.10-0.el7.x86_64.rpm. > > The final piece of the puzzle is the transitive list of requires for > librados2-13.2.10-0.el7.x86_64.rpm.  It has a new dependency that pulls > in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs > userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't > included in the recent lst file changes that added > librados2-13.2.10-0.el7.x86_64.rpm. > > A flock layer build-iso should have caught this.  I suspect build-iso > was only performed on a monolithic build. > > Open questions. > 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. > If yes, do we still need whatever modifications were applied to > librados2-13.2.2?  Do they need to be ported to librados2-13.2.10 , or > can we drop librados2 from the set of packages we have patches against? > > 2) For build-iso... should we prefer locally built packages even though > there is a higher package named in an lst?  If yes, then layered build > needs apply the local first policy accross layers.  Alternatively, > perhaps drop the local first policy, but add an audit tool to detect > when a locally built package is being masked in this way. > > Scott > > On 2020-06-02 10:30 p.m., build.starlingx at gmail.com > wrote: > > Project: STX_build_layer_flock_master_master > > Build #: 132 > > Status: Still Failing > > Timestamp: 20200603T020359Z > > Check logs at: > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs > > -------------------------------------------------------------------------------- > > Parameters > > FULL_BUILD: false > > FORCE_BUILD: false > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From alfredo.deluca at gmail.com Wed Jun 3 19:05:03 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Wed, 3 Jun 2020 21:05:03 +0200 Subject: [Starlingx-discuss] Subcloud on a Virtual Machine Message-ID: Hi all. For testing purposes we are trying to install a subcloud on a VM (Openstack to be precise) but we get a couple of errors as below. Booting from an ISO (STX 3.0) we get this 1. ERROR: Specified installation (sda) or boot (sda) device is invalid. then I supposed the ISO is looking for a device *sda* .. so we fixed that but then another issue occurred and the error now is 2. Disk "" given in clearpart command does not exist. Now I wonder if it is possible to install that on top of a VM and also what could it the fix for the second error. Any idea/clue? Cheers -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scott.little at windriver.com Wed Jun 3 21:01:51 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 3 Jun 2020 17:01:51 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> Message-ID: <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> No I don't think that would work.  We can't have two versions of the same package competing for dominance within the mock build environments.  i.e. on time pkg X builds against 13.2.2, the next time against 13.2.10.  The outcome dependent on the vagaries of job scheduling, build speeds, and any other number of factors.  If you compile against 13.2.10, will you run ok vs 13.2.2.  I wouldn't want to bet on it. The build layering solution might be to throw it in it's own layer. Until we are 100% committed to build layering, we need to converge on ONE version of ceph. Scott On 2020-06-03 10:52 a.m., Saul Wold wrote: > > > On 6/3/20 1:47 AM, Liu, ZhipengS wrote: >> Hi Scott, >> >> For question #1, >> >> When we built openstack ussuri image which is python3 only. >> >> It needs python3-rbd and related dependency, so we add >> librados2-13.2.10 and related packages. >> >> For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for >> python2. >> >> Shouldn’t  we let the build choose local build first? >> > Following up on this we need to be careful about which we choose, as I > said in the other email is this a one-off issue or something that we > see more of.  So maybe an audit tool would help. > >> Another option is moving these packages to container layer, add >> rpms_centos.lst in config/centos/flock/? >> > I understand this option better after chatting with Zhipeng, I think > this might be the best option adding the Updated Ceph / RBD related > packages to the container list which will be used for the Usurri > container builds but not by the platform OS. > > This would mean that the containers would have Ceph 13.2.10 related > packages and the platform OS would be 13.2.2.  Would that cause > problems or stability issues? > > Sau! > >> Thanks! >> >> Zhipeng >> >> *From:*Scott Little >> *Sent:* 2020年6月3日15:57 >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] [build-report] >> STX_build_layer_flock_master_master - Build # 132 - Still Failing! >> >> This was an interesting one. >> >> We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as >> part of the distro layer for some time. >> >> A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst >> of the flock layer. >> >> Now build-iso preferres locally built packages over downloaded ones, >> even if the downloaded on is of higher version.  Now that policy is >> open for debate, but that is what it does. >> >> Monolithic build uses the lst files of all layers, but having built >> librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects >> librados2-13.2.2-0.el7.tis.25.x86_64.rpm over >> librados2-13.2.10-0.el7.x86_64.rpm when building the iso. >> >> Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm >> from the distro layer build.  It doesn't build it itself.  The >> downloads from the two sources are lumped into a common repo, so it >> has no reason to prefer the lower versioned rpm.  
It selects >> librados2-13.2.10-0.el7.x86_64.rpm. >> >> The final piece of the puzzle is the transitive list of requires for >> librados2-13.2.10-0.el7.x86_64.rpm.  It has a new dependency that >> pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs >> userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's >> wasn't included in the recent lst file changes that added >> librados2-13.2.10-0.el7.x86_64.rpm. >> >> A flock layer build-iso should have caught this.  I suspect build-iso >> was only performed on a monolithic build. >> >> Open questions. >> 1) Is there a need to move to librados2-13.2.10 from >> librados2-13.2.2.  If yes, do we still need whatever modifications >> were applied to librados2-13.2.2?  Do they need to be ported to >> librados2-13.2.10 , or can we drop librados2 from the set of packages >> we have patches against? >> >> 2) For build-iso... should we prefer locally built packages even >> though there is a higher package named in an lst?  If yes, then >> layered build needs apply the local first policy accross layers.  >> Alternatively, perhaps drop the local first policy, but add an audit >> tool to detect when a locally built package is being masked in this way. >> >> Scott >> >> On 2020-06-02 10:30 p.m., build.starlingx at gmail.com >> wrote: >> >>     Project: STX_build_layer_flock_master_master >> >>     Build #: 132 >> >>     Status: Still Failing >> >>     Timestamp: 20200603T020359Z >> >>     Check logs at: >> >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs >> >> -------------------------------------------------------------------------------- >> >>     Parameters >> >>     FULL_BUILD: false >> >>     FORCE_BUILD: false >> >> >> >>     _______________________________________________ >> >>     Starlingx-discuss mailing list >> >>     Starlingx-discuss at lists.starlingx.io >> >> >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Wed Jun 3 21:54:22 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 21:54:22 +0000 Subject: [Starlingx-discuss] Weekly build meeting is cancelled due to PTG this week Message-ID: FYI - we will not be meeting at our usual Thursday meeting time for the build project. Frank PL for StarlingX Build project -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jun 3 22:08:29 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 3 Jun 2020 15:08:29 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: On 6/3/20 2:01 PM, Scott Little wrote: > No I don't think that would work.  
We can't have two versions of the > same package competing for dominance within the mock build > environments.  i.e. on time pkg X builds against 13.2.2, the next time > against 13.2.10.  The outcome dependent on the vagaries of job > scheduling, build speeds, and any other number of factors.  If you > compile against 13.2.10, will you run ok vs 13.2.2.  I wouldn't want to > bet on it. > > The build layering solution might be to throw it in it's own layer. > > Until we are 100% committed to build layering, we need to converge on > ONE version of ceph. > Ok, so one option is to move to Ceph 13.2.10 or drop the existing package list update that brings in the python3 and related Ceph packages. Do we need to at least revert that commit in-order to get the build working again? We might need to spend a few minutes to hash this out tomorrow morning at the PTG. Sau! > Scott > > > On 2020-06-03 10:52 a.m., Saul Wold wrote: >> >> >> On 6/3/20 1:47 AM, Liu, ZhipengS wrote: >>> Hi Scott, >>> >>> For question #1, >>> >>> When we built openstack ussuri image which is python3 only. >>> >>> It needs python3-rbd and related dependency, so we add >>> librados2-13.2.10 and related packages. >>> >>> For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for >>> python2. >>> >>> Shouldn’t  we let the build choose local build first? >>> >> Following up on this we need to be careful about which we choose, as I >> said in the other email is this a one-off issue or something that we >> see more of.  So maybe an audit tool would help. >> >>> Another option is moving these packages to container layer, add >>> rpms_centos.lst in config/centos/flock/? >>> >> I understand this option better after chatting with Zhipeng, I think >> this might be the best option adding the Updated Ceph / RBD related >> packages to the container list which will be used for the Usurri >> container builds but not by the platform OS. >> >> This would mean that the containers would have Ceph 13.2.10 related >> packages and the platform OS would be 13.2.2.  Would that cause >> problems or stability issues? >> >> Sau! >> >>> Thanks! >>> >>> Zhipeng >>> >>> *From:*Scott Little >>> *Sent:* 2020年6月3日15:57 >>> *To:* starlingx-discuss at lists.starlingx.io >>> *Subject:* Re: [Starlingx-discuss] [build-report] >>> STX_build_layer_flock_master_master - Build # 132 - Still Failing! >>> >>> This was an interesting one. >>> >>> We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as >>> part of the distro layer for some time. >>> >>> A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst >>> of the flock layer. >>> >>> Now build-iso preferres locally built packages over downloaded ones, >>> even if the downloaded on is of higher version.  Now that policy is >>> open for debate, but that is what it does. >>> >>> Monolithic build uses the lst files of all layers, but having built >>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects >>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm over >>> librados2-13.2.10-0.el7.x86_64.rpm when building the iso. >>> >>> Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm >>> from the distro layer build.  It doesn't build it itself.  The >>> downloads from the two sources are lumped into a common repo, so it >>> has no reason to prefer the lower versioned rpm.  It selects >>> librados2-13.2.10-0.el7.x86_64.rpm. >>> >>> The final piece of the puzzle is the transitive list of requires for >>> librados2-13.2.10-0.el7.x86_64.rpm.  
It has a new dependency that >>> pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs >>> userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's >>> wasn't included in the recent lst file changes that added >>> librados2-13.2.10-0.el7.x86_64.rpm. >>> >>> A flock layer build-iso should have caught this.  I suspect build-iso >>> was only performed on a monolithic build. >>> >>> Open questions. >>> 1) Is there a need to move to librados2-13.2.10 from >>> librados2-13.2.2.  If yes, do we still need whatever modifications >>> were applied to librados2-13.2.2?  Do they need to be ported to >>> librados2-13.2.10 , or can we drop librados2 from the set of packages >>> we have patches against? >>> >>> 2) For build-iso... should we prefer locally built packages even >>> though there is a higher package named in an lst?  If yes, then >>> layered build needs apply the local first policy accross layers. >>> Alternatively, perhaps drop the local first policy, but add an audit >>> tool to detect when a locally built package is being masked in this way. >>> >>> Scott >>> >>> On 2020-06-02 10:30 p.m., build.starlingx at gmail.com >>> wrote: >>> >>>     Project: STX_build_layer_flock_master_master >>> >>>     Build #: 132 >>> >>>     Status: Still Failing >>> >>>     Timestamp: 20200603T020359Z >>> >>>     Check logs at: >>> >>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs >>> >>> >>> -------------------------------------------------------------------------------- >>> >>> >>>     Parameters >>> >>>     FULL_BUILD: false >>> >>>     FORCE_BUILD: false >>> >>> >>> >>>     _______________________________________________ >>> >>>     Starlingx-discuss mailing list >>> >>>     Starlingx-discuss at lists.starlingx.io >>> >>> >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Thu Jun 4 02:27:07 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 3 Jun 2020 22:27:07 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 396 - Failure! 
Message-ID: <269577612.1591.1591237628513.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 396 Status: Failure Timestamp: 20200604T021722Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200604T020352Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200604T020352Z DOCKER_BUILD_ID: jenkins-master-flock-20200604T020352Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200604T020352Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200604T020352Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Thu Jun 4 02:27:10 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 3 Jun 2020 22:27:10 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 133 - Still Failing! In-Reply-To: <139468880.1585.1591151437284.JavaMail.javamailuser@localhost> References: <139468880.1585.1591151437284.JavaMail.javamailuser@localhost> Message-ID: <461822395.1594.1591237630931.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 133 Status: Still Failing Timestamp: 20200604T020352Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200604T020352Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From mingyuan.qi at intel.com Thu Jun 4 07:34:24 2020 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Thu, 4 Jun 2020 07:34:24 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management Message-ID: Hi, In Tuesday's PTG, I have introduced the proposal of a sub-project for small node management: Hummingbird. I put the document link here[0] for community members who are interested in the detail info but haven't joined Tuesday PTG. The target of Hummingbird project is to bring the ability of edge node(small node) management to StarlingX. The project gets the name "Hummingbird" from hummingbird's characteristics: Tiny, stably hovering and echo to "starling". Here are 3 reasons that having Hummingbird as a sub-project: 1. A bunch of technical areas such as containerization, networking, storage and flock services are converged in Hummingbird. The implementation of Hummingbird needs to be well coordinated among these technologies. 2. Hummingbird will be developed in a long term across multiple releases. The development pace of delivering features for small node management could be well discussed in the form of a sub-project. 3. Current sub-projects are organized in fundamental technologies. A sub-project based on individual functionality comes from a different perspective, for example like the projects in Openstack. It will enhance the collaboration of the members in different technology background in the community. All above aim to bring the small node management to StarlingX in a well scheduled and quality ensured way. It's a brand new proposal and I hope you could find something interesting in the doc. Welcome your questions and inputs. 
[0] https://drive.google.com/file/d/1VpglICCzI_PSGdCC12Y7MzhSE8cAolTM/view?usp=sharing Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Thu Jun 4 14:19:05 2020 From: scott.little at windriver.com (Scott Little) Date: Thu, 4 Jun 2020 10:19:05 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: I see https://review.opendev.org/#/c/733426/9 has been posted With this update, layered builds should pass, and would look like this ... * Flock and iso builds will use 13.2.2. * All container builds uses 13.2.10. o Do we want 13.2.10 in ALL containers? * Any ceph dependent rpms from distro/flock builds that make it into a container (if any), will have been compiled against 13.2.2, but will run against 13.2.10.  I'm more comfortable with a increment to the patch level than a decrement.  I think we can live with this until we can move to 13.2.10 universally. Monolithic will continue to build, but will remain confused ... All lst files, including container layer lsts, are downloaded before any package is built. Most if not all packages that depend on ceph will build against 13.2.10 as mock/yum does not understand the 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of which is unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see shipped version LOWER than the complied against version. On 2020-06-03 6:08 p.m., Saul Wold wrote: > > > On 6/3/20 2:01 PM, Scott Little wrote: >> No I don't think that would work.  We can't have two versions of the >> same package competing for dominance within the mock build >> environments.  i.e. on time pkg X builds against 13.2.2, the next >> time against 13.2.10.  The outcome dependent on the vagaries of job >> scheduling, build speeds, and any other number of factors.  If you >> compile against 13.2.10, will you run ok vs 13.2.2.  I wouldn't want >> to bet on it. >> >> The build layering solution might be to throw it in it's own layer. >> >> Until we are 100% committed to build layering, we need to converge on >> ONE version of ceph. >> > Ok, so one option is to move to Ceph 13.2.10 or drop the existing > package list update that brings in the python3 and related Ceph packages. > > Do we need to at least revert that commit in-order to get the build > working again? > > We might need to spend a few minutes to hash this out tomorrow morning > at the PTG. > > Sau! > >> Scott >> >> >> On 2020-06-03 10:52 a.m., Saul Wold wrote: >>> >>> >>> On 6/3/20 1:47 AM, Liu, ZhipengS wrote: >>>> Hi Scott, >>>> >>>> For question #1, >>>> >>>> When we built openstack ussuri image which is python3 only. >>>> >>>> It needs python3-rbd and related dependency, so we add >>>> librados2-13.2.10 and related packages. >>>> >>>> For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for >>>> python2. >>>> >>>> Shouldn’t  we let the build choose local build first? >>>> >>> Following up on this we need to be careful about which we choose, as >>> I said in the other email is this a one-off issue or something that >>> we see more of.  So maybe an audit tool would help. 
>>> >>>> Another option is moving these packages to container layer, add >>>> rpms_centos.lst in config/centos/flock/? >>>> >>> I understand this option better after chatting with Zhipeng, I think >>> this might be the best option adding the Updated Ceph / RBD related >>> packages to the container list which will be used for the Usurri >>> container builds but not by the platform OS. >>> >>> This would mean that the containers would have Ceph 13.2.10 related >>> packages and the platform OS would be 13.2.2.  Would that cause >>> problems or stability issues? >>> >>> Sau! >>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> *From:*Scott Little >>>> *Sent:* 2020年6月3日15:57 >>>> *To:* starlingx-discuss at lists.starlingx.io >>>> *Subject:* Re: [Starlingx-discuss] [build-report] >>>> STX_build_layer_flock_master_master - Build # 132 - Still Failing! >>>> >>>> This was an interesting one. >>>> >>>> We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as >>>> part of the distro layer for some time. >>>> >>>> A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst >>>> of the flock layer. >>>> >>>> Now build-iso preferres locally built packages over downloaded >>>> ones, even if the downloaded on is of higher version.  Now that >>>> policy is open for debate, but that is what it does. >>>> >>>> Monolithic build uses the lst files of all layers, but having built >>>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects >>>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm over >>>> librados2-13.2.10-0.el7.x86_64.rpm when building the iso. >>>> >>>> Flock layer build, downloads >>>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer >>>> build.  It doesn't build it itself.  The downloads from the two >>>> sources are lumped into a common repo, so it has no reason to >>>> prefer the lower versioned rpm.  It selects >>>> librados2-13.2.10-0.el7.x86_64.rpm. >>>> >>>> The final piece of the puzzle is the transitive list of requires >>>> for librados2-13.2.10-0.el7.x86_64.rpm.  It has a new dependency >>>> that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn >>>> needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. >>>> It's wasn't included in the recent lst file changes that added >>>> librados2-13.2.10-0.el7.x86_64.rpm. >>>> >>>> A flock layer build-iso should have caught this.  I suspect >>>> build-iso was only performed on a monolithic build. >>>> >>>> Open questions. >>>> 1) Is there a need to move to librados2-13.2.10 from >>>> librados2-13.2.2.  If yes, do we still need whatever modifications >>>> were applied to librados2-13.2.2?  Do they need to be ported to >>>> librados2-13.2.10 , or can we drop librados2 from the set of >>>> packages we have patches against? >>>> >>>> 2) For build-iso... should we prefer locally built packages even >>>> though there is a higher package named in an lst?  If yes, then >>>> layered build needs apply the local first policy accross layers. >>>> Alternatively, perhaps drop the local first policy, but add an >>>> audit tool to detect when a locally built package is being masked >>>> in this way. 
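A rough sketch of what such an lst-vs-local-build audit could look like is below. This is not an existing StarlingX tool: the RPM output directory, the lst naming (plain name-version-release.arch.rpm entries) and the use of sort -V instead of full RPM version-comparison rules are all assumptions here.

#!/bin/bash
# Rough sketch of an lst-vs-local-build audit (assumed paths, not an existing tool).
# It flags packages that are built locally but named with a HIGHER version in an
# lst file, i.e. the case where the "prefer local" policy of build-iso would
# silently mask the lst entry.

LOCAL_RPM_DIR="${MY_WORKSPACE}/std/rpmbuild/RPMS"          # assumption
LST_FILES=$(find "${MY_REPO}" -name '*.lst' 2>/dev/null)   # assumption
[ -n "${LST_FILES}" ] || { echo "no lst files found"; exit 0; }

for rpm_file in "${LOCAL_RPM_DIR}"/*.rpm; do
    [ -e "${rpm_file}" ] || continue
    name=$(rpm -qp --queryformat '%{NAME}' "${rpm_file}" 2>/dev/null)
    local_vr=$(rpm -qp --queryformat '%{VERSION}-%{RELEASE}' "${rpm_file}" 2>/dev/null)
    # any lst entry that names the same package
    grep -h "^${name}-[0-9]" ${LST_FILES} 2>/dev/null | while read -r lst_entry; do
        lst_vr=$(echo "${lst_entry}" | sed -e "s/^${name}-//" -e 's/\.[^.]*\.rpm$//')
        # sort -V puts the higher version last; warn when the lst entry would win
        highest=$(printf '%s\n%s\n' "${local_vr}" "${lst_vr}" | sort -V | tail -n 1)
        if [ "${highest}" = "${lst_vr}" ] && [ "${lst_vr}" != "${local_vr}" ]; then
            echo "WARNING: ${name}: local build ${local_vr} masks lst entry ${lst_vr}"
        fi
    done
done

Run against a flock layer workspace, a check like this should flag exactly the librados2 case discussed in this thread (a locally built 13.2.2 masking a 13.2.10 lst entry).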
>>>> >>>> Scott >>>> >>>> On 2020-06-02 10:30 p.m., build.starlingx at gmail.com >>>> wrote: >>>> >>>>     Project: STX_build_layer_flock_master_master >>>> >>>>     Build #: 132 >>>> >>>>     Status: Still Failing >>>> >>>>     Timestamp: 20200603T020359Z >>>> >>>>     Check logs at: >>>> >>>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>>     Parameters >>>> >>>>     FULL_BUILD: false >>>> >>>>     FORCE_BUILD: false >>>> >>>> >>>> >>>>     _______________________________________________ >>>> >>>>     Starlingx-discuss mailing list >>>> >>>>     Starlingx-discuss at lists.starlingx.io >>>> >>>> >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Thu Jun 4 14:36:06 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 4 Jun 2020 14:36:06 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: Hi Scott, For our OpenStack upgrade case, we may have one more option that is not adding this ceph 13.2.10 repo to local build repo folder. Instead, we add this ceph repo as a parameter when we run build-stx-base.sh. Then this repo only used by OpenStack build. We will verify it tomorrow. Thanks! Zhipeng From: Scott Little Sent: 2020年6月4日 22:19 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! I see https://review.opendev.org/#/c/733426/9 has been posted With this update, layered builds should pass, and would look like this ... * Flock and iso builds will use 13.2.2. * All container builds uses 13.2.10. * Do we want 13.2.10 in ALL containers? * Any ceph dependent rpms from distro/flock builds that make it into a container (if any), will have been compiled against 13.2.2, but will run against 13.2.10. I'm more comfortable with a increment to the patch level than a decrement. I think we can live with this until we can move to 13.2.10 universally. Monolithic will continue to build, but will remain confused ... All lst files, including container layer lsts, are downloaded before any package is built. 
Most if not all packages that depend on ceph will build against 13.2.10 as mock/yum does not understand the 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of which is unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see shipped version LOWER than the complied against version. On 2020-06-03 6:08 p.m., Saul Wold wrote: On 6/3/20 2:01 PM, Scott Little wrote: No I don't think that would work. We can't have two versions of the same package competing for dominance within the mock build environments. i.e. on time pkg X builds against 13.2.2, the next time against 13.2.10. The outcome dependent on the vagaries of job scheduling, build speeds, and any other number of factors. If you compile against 13.2.10, will you run ok vs 13.2.2. I wouldn't want to bet on it. The build layering solution might be to throw it in it's own layer. Until we are 100% committed to build layering, we need to converge on ONE version of ceph. Ok, so one option is to move to Ceph 13.2.10 or drop the existing package list update that brings in the python3 and related Ceph packages. Do we need to at least revert that commit in-order to get the build working again? We might need to spend a few minutes to hash this out tomorrow morning at the PTG. Sau! Scott On 2020-06-03 10:52 a.m., Saul Wold wrote: On 6/3/20 1:47 AM, Liu, ZhipengS wrote: Hi Scott, For question #1, When we built openstack ussuri image which is python3 only. It needs python3-rbd and related dependency, so we add librados2-13.2.10 and related packages. For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for python2. Shouldn’t we let the build choose local build first? Following up on this we need to be careful about which we choose, as I said in the other email is this a one-off issue or something that we see more of. So maybe an audit tool would help. Another option is moving these packages to container layer, add rpms_centos.lst in config/centos/flock/? I understand this option better after chatting with Zhipeng, I think this might be the best option adding the Updated Ceph / RBD related packages to the container list which will be used for the Usurri container builds but not by the platform OS. This would mean that the containers would have Ceph 13.2.10 related packages and the platform OS would be 13.2.2. Would that cause problems or stability issues? Sau! Thanks! Zhipeng *From:*Scott Little *Sent:* 2020年6月3日15:57 *To:* starlingx-discuss at lists.starlingx.io *Subject:* Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! This was an interesting one. We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso preferres locally built packages over downloaded ones, even if the downloaded on is of higher version. Now that policy is open for debate, but that is what it does. Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower versioned rpm. 
It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions. 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10 , or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then layered build needs apply the local first policy accross layers. Alternatively, perhaps drop the local first policy, but add an audit tool to detect when a locally built package is being masked in this way. Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jun 4 14:40:20 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 4 Jun 2020 14:40:20 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management In-Reply-To: References: Message-ID: <20200604144020.rqarmwxhzxngoj2v@yuggoth.org> On 2020-06-04 07:34:24 +0000 (+0000), Qi, Mingyuan wrote: > In Tuesday's PTG, I have introduced the proposal of a sub-project > for small node management: Hummingbird. [...] You may want to take care that it's not confused with https://opendev.org/openstack/swift/src/branch/feature/hummingbird/go (a reimplementation of OpenStack Swift's object-server in golang), but since that effort hasn't seen any activity in several years it's probable that not many people remember it anyway. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aj at suse.com Thu Jun 4 15:55:10 2020 From: aj at suse.com (Andreas Jaeger) Date: Thu, 4 Jun 2020 17:55:10 +0200 Subject: [Starlingx-discuss] Fwd: [docs][all] Important changes in recent openstackdocstheme updates In-Reply-To: <67de416d-8881-66f5-29d9-29069290e354@suse.com> References: <67de416d-8881-66f5-29d9-29069290e354@suse.com> Message-ID: <4b09df96-8a13-3acc-247f-8806b03016f7@suse.com> I pushed changes for all starlingx repos that use openstackdocstheme to update to newer version, see the attached email for a longer explanation that I send to the openstack list. Full set of changes is: https://review.opendev.org/#/q/topic:reno-openstackdocstheme+is:open+projects:starlingx If there are any questions, please reach out to me - otherwise, happy reviewing ;) Andreas -------- Forwarded Message -------- Subject: [docs][all] Important changes in recent openstackdocstheme updates Date: Wed, 20 May 2020 17:40:07 +0200 From: Andreas Jaeger Organization: SUSE Software Solutions Germany GmbH, Nuernberg; GF: Felix Imendörffer; HRB 247165 (AG München) To: openstack-discuss at lists.openstack.org CC: Stephen Finucane A couple of changes recently merged into openstackdocstheme to fix problems reported. These had some surprises in it and we'd like to inform you about the changes: * Config options are now prefixed with openstackdocs_, the old names will be removed in a future release * The 'project' config option is now only respected (and displayed in the left menu) if 'openstackdocs_auto_name = False' is set. By default, the theme uses the package name (from setup.cfg) * The HTML files show the version number by default (with exception of releasenotes and api docs) calculated from git. If you want to use your own version number or disable it, set 'openstackdocs_auto_version = False' and manually configure the 'version' and 'release' options. * Previously, the theme always used 'pygments_style = "native"' and overrode the setting of 'sphinx' that many repos have. Now the setting is respected. For a few repos this lead to unreadable code snippets. If you see this or want to go back to the previous theme, configure 'pygments_style = "native"'. * Many projects have written PDF documents. openstackdocstheme can now optionally link to them. Set 'openstackdocs_pdf_link' to True to show the icon with path. Note that the PDF file is placed on docs.openstack.org in the top of the html files while in check/gate it's in a separate PDF folder. Thus, the site preview will show in check/gate a broken link - but it works fine, check [2]. * Both reno (since version 3.1.0) and openstackdocsstheme are now declared parallel safe, the CI jobs automatically build releasenotes in parallel [1]. You can modify your local tox job to do this by adding the '-j auto' parameter to your 'sphinx-build' invocation. We're releasing openstackdocstheme version 2.2.1 soon with two further fixes: * PDF documents will now show the version number like html document, no need to configure versions in conf.py for this anymore [3]. * small bug fix (if you set auto_name = False in doc/source/conf.py, this hit so far 5 repos)[4]. Everything is documented in the documentation of openstackdocstheme [2]. If there are any questions, best ask in #openstack-oslo. Andreas has started pushing changes to update projects with topic:reno-openstackdocstheme. 
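To make the above concrete, the snippet below is one way to try the new-style options in a project's doc/source/conf.py. The option names are the ones listed in this mail; the values and the append-to-conf.py approach are only illustrative, and whether a given repo wants auto_name/auto_version disabled is a per-project decision.

# Illustrative only: option names come from the notes above, values are examples,
# and appending to conf.py is just a quick way to try them locally.
cat >> doc/source/conf.py <<'EOF'

# openstackdocstheme 2.x options (note the new openstackdocs_ prefix)
openstackdocs_auto_name = False     # keep the explicit 'project' setting in the menu
openstackdocs_auto_version = False  # set 'version'/'release' by hand instead of from git
openstackdocs_pdf_link = True       # show a link to the built PDF, if one is published
pygments_style = 'native'           # restore the previous code-snippet style
EOF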
Hope that's all for Victoria on the openstackdocstheme, Stephen and Andreas [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014902.html [2] https://docs.openstack.org/openstackdocstheme/ [3] https://review.opendev.org/729554 [4] https://review.opendev.org/729031 -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From bruce.e.jones at intel.com Thu Jun 4 17:42:27 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 4 Jun 2020 17:42:27 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management In-Reply-To: References: Message-ID: Mingyuan, thank you for bringing this proposal forward. I'd like to explore the idea of creating a sub-project for this. Do you have an estimate as to which repos the project will be working in? Are there new repos to be created? Will the changes land in other sub-project areas? If so, which? If we can figure out where the code lands, that would help us figure out which existing sub-project (if any) should be the home for this code, or if a new sub-project is needed. brucej From: Qi, Mingyuan Sent: Thursday, June 4, 2020 12:34 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Hummingbird: A project for small node management Hi, In Tuesday's PTG, I have introduced the proposal of a sub-project for small node management: Hummingbird. I put the document link here[0] for community members who are interested in the detail info but haven't joined Tuesday PTG. The target of Hummingbird project is to bring the ability of edge node(small node) management to StarlingX. The project gets the name "Hummingbird" from hummingbird's characteristics: Tiny, stably hovering and echo to "starling". Here are 3 reasons that having Hummingbird as a sub-project: 1. A bunch of technical areas such as containerization, networking, storage and flock services are converged in Hummingbird. The implementation of Hummingbird needs to be well coordinated among these technologies. 2. Hummingbird will be developed in a long term across multiple releases. The development pace of delivering features for small node management could be well discussed in the form of a sub-project. 3. Current sub-projects are organized in fundamental technologies. A sub-project based on individual functionality comes from a different perspective, for example like the projects in Openstack. It will enhance the collaboration of the members in different technology background in the community. All above aim to bring the small node management to StarlingX in a well scheduled and quality ensured way. It's a brand new proposal and I hope you could find something interesting in the doc. Welcome your questions and inputs. [0] https://drive.google.com/file/d/1VpglICCzI_PSGdCC12Y7MzhSE8cAolTM/view?usp=sharing Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From maryx.camp at intel.com Thu Jun 4 19:49:37 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 4 Jun 2020 19:49:37 +0000 Subject: [Starlingx-discuss] Fwd: [docs][all] Important changes in recent openstackdocstheme updates In-Reply-To: <4b09df96-8a13-3acc-247f-8806b03016f7@suse.com> References: <67de416d-8881-66f5-29d9-29069290e354@suse.com> <4b09df96-8a13-3acc-247f-8806b03016f7@suse.com> Message-ID: Thanks Andreas! 
I have no experience with theme file updates, your help with the StarlingX docs is much appreciated. thanks again, Mary Camp PTIGlobal Technical Writer | maryx.camp at intel.com -----Original Message----- From: Andreas Jaeger Sent: Thursday, June 4, 2020 11:55 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Fwd: [docs][all] Important changes in recent openstackdocstheme updates I pushed changes for all starlingx repos that use openstackdocstheme to update to newer version, see the attached email for a longer explanation that I send to the openstack list. Full set of changes is: https://review.opendev.org/#/q/topic:reno-openstackdocstheme+is:open+projects:starlingx If there are any questions, please reach out to me - otherwise, happy reviewing ;) Andreas -------- Forwarded Message -------- Subject: [docs][all] Important changes in recent openstackdocstheme updates Date: Wed, 20 May 2020 17:40:07 +0200 From: Andreas Jaeger Organization: SUSE Software Solutions Germany GmbH, Nuernberg; GF: Felix Imendörffer; HRB 247165 (AG München) To: openstack-discuss at lists.openstack.org CC: Stephen Finucane A couple of changes recently merged into openstackdocstheme to fix problems reported. These had some surprises in it and we'd like to inform you about the changes: * Config options are now prefixed with openstackdocs_, the old names will be removed in a future release * The 'project' config option is now only respected (and displayed in the left menu) if 'openstackdocs_auto_name = False' is set. By default, the theme uses the package name (from setup.cfg) * The HTML files show the version number by default (with exception of releasenotes and api docs) calculated from git. If you want to use your own version number or disable it, set 'openstackdocs_auto_version = False' and manually configure the 'version' and 'release' options. * Previously, the theme always used 'pygments_style = "native"' and overrode the setting of 'sphinx' that many repos have. Now the setting is respected. For a few repos this lead to unreadable code snippets. If you see this or want to go back to the previous theme, configure 'pygments_style = "native"'. * Many projects have written PDF documents. openstackdocstheme can now optionally link to them. Set 'openstackdocs_pdf_link' to True to show the icon with path. Note that the PDF file is placed on docs.openstack.org in the top of the html files while in check/gate it's in a separate PDF folder. Thus, the site preview will show in check/gate a broken link - but it works fine, check [2]. * Both reno (since version 3.1.0) and openstackdocsstheme are now declared parallel safe, the CI jobs automatically build releasenotes in parallel [1]. You can modify your local tox job to do this by adding the '-j auto' parameter to your 'sphinx-build' invocation. We're releasing openstackdocstheme version 2.2.1 soon with two further fixes: * PDF documents will now show the version number like html document, no need to configure versions in conf.py for this anymore [3]. * small bug fix (if you set auto_name = False in doc/source/conf.py, this hit so far 5 repos)[4]. Everything is documented in the documentation of openstackdocstheme [2]. If there are any questions, best ask in #openstack-oslo. Andreas has started pushing changes to update projects with topic:reno-openstackdocstheme. 
Hope that's all for Victoria on the openstackdocstheme, Stephen and Andreas [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014902.html [2] https://docs.openstack.org/openstackdocstheme/ [3] https://review.opendev.org/729554 [4] https://review.opendev.org/729031 -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Fri Jun 5 02:23:55 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Jun 2020 22:23:55 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 399 - Failure! Message-ID: <226828596.1601.1591323835965.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 399 Status: Failure Timestamp: 20200605T021358Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200605T020038Z DOCKER_BUILD_ID: jenkins-master-flock-20200605T020038Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200605T020038Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Fri Jun 5 02:23:57 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Jun 2020 22:23:57 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 134 - Still Failing! In-Reply-To: <1888841825.1592.1591237629065.JavaMail.javamailuser@localhost> References: <1888841825.1592.1591237629065.JavaMail.javamailuser@localhost> Message-ID: <683945570.1604.1591323838250.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 134 Status: Still Failing Timestamp: 20200605T020038Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From zhipengs.liu at intel.com Fri Jun 5 06:36:14 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 5 Jun 2020 06:36:14 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank, As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. My test step is: 1) ssh to standby controller and sudo reboot -f for it. 2) sudo reboot -f for activated controller All pods can resume after a while. However, I could reproduce this issue with DB 20200516T080009Z. From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. 
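For reference, the reset sequence described above can be scripted roughly as follows. This is only a sketch of the manual test, assuming controller-1 is the standby node, controller-0 is the active one, and stx-openstack runs in the "openstack" namespace.

# 1) Force-reboot the standby controller, then the active one (the sequence above):
ssh controller-1 'sudo reboot -f'
sudo reboot -f

# 2) Once both controllers are back, a coarse check of what has not recovered yet
#    (anything stuck in CrashLoopBackOff / Init / Pending shows up here):
kubectl get pods -n openstack --no-headers | grep -Ev 'Running|Completed'

# 3) Or poll until nothing is left in a non-Running, non-Completed state:
until [ -z "$(kubectl get pods -n openstack --no-headers | grep -Ev 'Running|Completed')" ]; do
    sleep 30
done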
In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes below 2 patches which fixed this stability issue. https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death [1] https://bugs.launchpad.net/starlingx/+bug/1881899 [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:35 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? 
I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. 
@Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. 
Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. 
[OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... 
[OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  
Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From mingyuan.qi at intel.com Fri Jun 5 07:13:57 2020 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Fri, 5 Jun 2020 07:13:57 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management In-Reply-To: References: Message-ID: Bruce, Thanks for you input, I've mapped the Hummingbird's components to repos as well as existing sub-projects below: Components of HB Related sub-project Landed in repos New personality Flock services project config Networking Networking project/ Security project integ Provisioning Containers project ansible-playbook Management Flock services project/ Containers project config/metal/fault Storage Non-openstack project TBD App orchestration Containers project TBD Dist-cloud collaboration Distributed cloud project distcloud As you can see, the components will be landed in multiple repos across multiple sub-projects. Mingyuan From: Jones, Bruce E Sent: Friday, June 5, 2020 1:42 To: Qi, Mingyuan ; starlingx-discuss at lists.starlingx.io Subject: RE: Hummingbird: A project for small node management Mingyuan, thank you for bringing this proposal forward. I'd like to explore the idea of creating a sub-project for this. Do you have an estimate as to which repos the project will be working in? Are there new repos to be created? Will the changes land in other sub-project areas? If so, which? 
If we can figure out where the code lands, that would help us figure out which existing sub-project (if any) should be the home for this code, or if a new sub-project is needed. brucej From: Qi, Mingyuan > Sent: Thursday, June 4, 2020 12:34 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Hummingbird: A project for small node management Hi, In Tuesday's PTG, I have introduced the proposal of a sub-project for small node management: Hummingbird. I put the document link here[0] for community members who are interested in the detail info but haven't joined Tuesday PTG. The target of Hummingbird project is to bring the ability of edge node(small node) management to StarlingX. The project gets the name "Hummingbird" from hummingbird's characteristics: Tiny, stably hovering and echo to "starling". Here are 3 reasons that having Hummingbird as a sub-project: 1. A bunch of technical areas such as containerization, networking, storage and flock services are converged in Hummingbird. The implementation of Hummingbird needs to be well coordinated among these technologies. 2. Hummingbird will be developed in a long term across multiple releases. The development pace of delivering features for small node management could be well discussed in the form of a sub-project. 3. Current sub-projects are organized in fundamental technologies. A sub-project based on individual functionality comes from a different perspective, for example like the projects in Openstack. It will enhance the collaboration of the members in different technology background in the community. All above aim to bring the small node management to StarlingX in a well scheduled and quality ensured way. It's a brand new proposal and I hope you could find something interesting in the doc. Welcome your questions and inputs. [0] https://drive.google.com/file/d/1VpglICCzI_PSGdCC12Y7MzhSE8cAolTM/view?usp=sharing Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Fri Jun 5 14:31:39 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 5 Jun 2020 14:31:39 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Zhipeng: This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020. If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0. 
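For anyone re-running this verification after the openstack-helm-infra rebase, the following is a minimal sketch (not taken from the thread) of how Galera recovery can be confirmed once both controllers are back up. The mariadb-server-0 pod name matches the one quoted later in this thread and stx-openstack runs in the "openstack" namespace; the root-password variable and the exact checks are assumptions to adapt to the actual deployment.

  kubectl -n openstack get pods | grep mariadb
  kubectl -n openstack exec mariadb-server-0 -- \
      mysql -u root -p"${MARIADB_ROOT_PASSWORD}" \
      -e "SHOW STATUS LIKE 'wsrep_cluster%'"
  # Once recovery completes, expect wsrep_cluster_status = Primary and
  # wsrep_cluster_size equal to the number of mariadb server pods.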
Frank [1] https://storyboard.openstack.org/#!/story/2004712 -----Original Message----- From: Liu, ZhipengS Sent: Friday, June 05, 2020 2:36 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. My test step is: 1) ssh to standby controller and sudo reboot -f for it. 2) sudo reboot -f for activated controller All pods can resume after a while. However, I could reproduce this issue with DB 20200516T080009Z. From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes below 2 patches which fixed this stability issue. https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death [1] https://bugs.launchpad.net/starlingx/+bug/1881899 [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:35 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. 
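To make the failure mode above concrete, a short reproduction sequence (the helm-override-show command and its arguments are the same ones quoted elsewhere in this thread; output is omitted):

  source /etc/platform/openrc
  system application-list        # note whether stx-openstack is "uploaded" or "applied"
  system helm-override-show stx-openstack mariadb openstack
  # Works while the app is applied; crashes while the app is only uploaded,
  # which is the regression tied to the decoupling commits.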
Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 
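For reference, an outline of the dual-controller reset scenario being reproduced here, written out as commands. The host name and the recovery checks are illustrative, not the test team's exact procedure:

  # step 1: on the standby controller (e.g. controller-1), force a reset
  sudo reboot -f
  # step 2: immediately afterwards, on the active controller
  sudo reboot -f
  # after both nodes come back, from the active controller:
  kubectl -n openstack get pods | grep -Ev 'Running|Completed'   # anything listed is stuck, e.g. CrashLoopBackOff
  openstack endpoint list   # any openstack CLI call, after sourcing credentials; it should respond once recovery completes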
2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? 
For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
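Since the plain-text archive flattens the YAML quoted in the IPv6 discussion above, the working static override in manifest.yaml reads roughly as follows (indentation is inferred; only the bind_address line is the actual change):

  values:
    conf:
      database:
        config_override: |
          [mysqld]
          bind_address=::

The earlier failure ("Found option without preceding group ... at line: 1") is consistent with the rendered 20-override.cnf starting with a literal "|-" token instead of the ini content, i.e. the block scalar indicator leaking into the file rather than being interpreted as YAML.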
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! 
> > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Fri Jun 5 15:59:27 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 5 Jun 2020 15:59:27 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management In-Reply-To: References: Message-ID: Mingyuan, thank you. I've put this on the agenda for the next TSC call. brucej From: Qi, Mingyuan Sent: Friday, June 5, 2020 12:14 AM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: RE: Hummingbird: A project for small node management Bruce, Thanks for you input, I've mapped the Hummingbird's components to repos as well as existing sub-projects below: Components of HB Related sub-project Landed in repos New personality Flock services project config Networking Networking project/ Security project integ Provisioning Containers project ansible-playbook Management Flock services project/ Containers project config/metal/fault Storage Non-openstack project TBD App orchestration Containers project TBD Dist-cloud collaboration Distributed cloud project distcloud As you can see, the components will be landed in multiple repos across multiple sub-projects. Mingyuan From: Jones, Bruce E > Sent: Friday, June 5, 2020 1:42 To: Qi, Mingyuan >; starlingx-discuss at lists.starlingx.io Subject: RE: Hummingbird: A project for small node management Mingyuan, thank you for bringing this proposal forward. I'd like to explore the idea of creating a sub-project for this. Do you have an estimate as to which repos the project will be working in? Are there new repos to be created? Will the changes land in other sub-project areas? If so, which? If we can figure out where the code lands, that would help us figure out which existing sub-project (if any) should be the home for this code, or if a new sub-project is needed. brucej From: Qi, Mingyuan > Sent: Thursday, June 4, 2020 12:34 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Hummingbird: A project for small node management Hi, In Tuesday's PTG, I have introduced the proposal of a sub-project for small node management: Hummingbird. 
I put the document link here[0] for community members who are interested in the detail info but haven't joined Tuesday PTG. The target of Hummingbird project is to bring the ability of edge node(small node) management to StarlingX. The project gets the name "Hummingbird" from hummingbird's characteristics: Tiny, stably hovering and echo to "starling". Here are 3 reasons that having Hummingbird as a sub-project: 1. A bunch of technical areas such as containerization, networking, storage and flock services are converged in Hummingbird. The implementation of Hummingbird needs to be well coordinated among these technologies. 2. Hummingbird will be developed in a long term across multiple releases. The development pace of delivering features for small node management could be well discussed in the form of a sub-project. 3. Current sub-projects are organized in fundamental technologies. A sub-project based on individual functionality comes from a different perspective, for example like the projects in Openstack. It will enhance the collaboration of the members in different technology background in the community. All above aim to bring the small node management to StarlingX in a well scheduled and quality ensured way. It's a brand new proposal and I hope you could find something interesting in the doc. Welcome your questions and inputs. [0] https://drive.google.com/file/d/1VpglICCzI_PSGdCC12Y7MzhSE8cAolTM/view?usp=sharing Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Jun 5 17:39:36 2020 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 5 Jun 2020 10:39:36 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 399 - Failure! In-Reply-To: <226828596.1601.1591323835965.JavaMail.javamailuser@localhost> References: <226828596.1601.1591323835965.JavaMail.javamailuser@localhost> Message-ID: Is anyone looking into this build failure? Is this still related to the python3 packages and multiple versions of packages? If so, what's the next steps to resolve this? We have not had a successful build this week! Thanks Sau! 
On 6/4/20 7:23 PM, build.starlingx at gmail.com wrote: > Project: STX_build_pre_installer_layered > Build #: 399 > Status: Failure > Timestamp: 20200605T021358Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs > -------------------------------------------------------------------------------- > Parameters > > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200605T020038Z > DOCKER_BUILD_ID: jenkins-master-flock-20200605T020038Z-builder > OS: centos > MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs > FULL_BUILD: false > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200605T020038Z/logs > MASTER_JOB_NAME: STX_build_layer_flock_master_master > LAYER: flock > MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock > BUILD_ISO: true > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From zhipengs.liu at intel.com Sat Jun 6 01:30:00 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Sat, 6 Jun 2020 01:30:00 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: Hi Scott, We have updated the patch below as you see and fixed your comment as well, thanks! https://review.opendev.org/#/c/733426/ It has been verified by Chengde! Many thanks!! After this patch get merged, could you do me a favor to cherry pick below patches to check if OpenStack images build can be triggered successfully by cengn script? (glance, cinder, nova, horizon) https://review.opendev.org/#/c/712880/ Modify build-tools and stable-wheels for Ussuri upgrading https://review.opendev.org/#/c/712862/ Update openstack docker images for stable/ussuri You might need add below repo in your build script. --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ Thanks a lot! Zhipeng From: Liu, ZhipengS Sent: 2020年6月4日 22:36 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! Hi Scott, For our OpenStack upgrade case, we may have one more option that is not adding this ceph 13.2.10 repo to local build repo folder. Instead, we add this ceph repo as a parameter when we run build-stx-base.sh. Then this repo only used by OpenStack build. We will verify it tomorrow. Thanks! Zhipeng From: Scott Little > Sent: 2020年6月4日 22:19 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! I see https://review.opendev.org/#/c/733426/9 has been posted With this update, layered builds should pass, and would look like this ... * Flock and iso builds will use 13.2.2. * All container builds uses 13.2.10. * Do we want 13.2.10 in ALL containers? * Any ceph dependent rpms from distro/flock builds that make it into a container (if any), will have been compiled against 13.2.2, but will run against 13.2.10. 
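For clarity, the "repo as a parameter" option mentioned earlier in this thread amounts to enabling the Ussuri ceph repo only for the container base-image build, so the flock/iso build never sees 13.2.10 and keeps the locally built 13.2.2. A sketch of the invocation follows; only the --repo argument is quoted from this thread, and the other arguments normally passed to the script are omitted here:

  # run from the build environment when building the stx container base image only
  build-stx-base.sh --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/   # plus the usual image/registry arguments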
I'm more comfortable with a increment to the patch level than a decrement. I think we can live with this until we can move to 13.2.10 universally. Monolithic will continue to build, but will remain confused ... All lst files, including container layer lsts, are downloaded before any package is built. Most if not all packages that depend on ceph will build against 13.2.10 as mock/yum does not understand the 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of which is unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see shipped version LOWER than the complied against version. On 2020-06-03 6:08 p.m., Saul Wold wrote: On 6/3/20 2:01 PM, Scott Little wrote: No I don't think that would work. We can't have two versions of the same package competing for dominance within the mock build environments. i.e. on time pkg X builds against 13.2.2, the next time against 13.2.10. The outcome dependent on the vagaries of job scheduling, build speeds, and any other number of factors. If you compile against 13.2.10, will you run ok vs 13.2.2. I wouldn't want to bet on it. The build layering solution might be to throw it in it's own layer. Until we are 100% committed to build layering, we need to converge on ONE version of ceph. Ok, so one option is to move to Ceph 13.2.10 or drop the existing package list update that brings in the python3 and related Ceph packages. Do we need to at least revert that commit in-order to get the build working again? We might need to spend a few minutes to hash this out tomorrow morning at the PTG. Sau! Scott On 2020-06-03 10:52 a.m., Saul Wold wrote: On 6/3/20 1:47 AM, Liu, ZhipengS wrote: Hi Scott, For question #1, When we built openstack ussuri image which is python3 only. It needs python3-rbd and related dependency, so we add librados2-13.2.10 and related packages. For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for python2. Shouldn’t we let the build choose local build first? Following up on this we need to be careful about which we choose, as I said in the other email is this a one-off issue or something that we see more of. So maybe an audit tool would help. Another option is moving these packages to container layer, add rpms_centos.lst in config/centos/flock/? I understand this option better after chatting with Zhipeng, I think this might be the best option adding the Updated Ceph / RBD related packages to the container list which will be used for the Usurri container builds but not by the platform OS. This would mean that the containers would have Ceph 13.2.10 related packages and the platform OS would be 13.2.2. Would that cause problems or stability issues? Sau! Thanks! Zhipeng *From:*Scott Little *Sent:* 2020年6月3日15:57 *To:* starlingx-discuss at lists.starlingx.io *Subject:* Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! This was an interesting one. We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso preferres locally built packages over downloaded ones, even if the downloaded on is of higher version. Now that policy is open for debate, but that is what it does. 
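Building on the "audit tool" idea raised above, a rough sketch of what such a check could look like: warn whenever a locally built package is also named in a layer lst file at a different version, since the prefer-local policy would silently mask the lst entry at build-iso time. The directory layout below is an assumption; point the two variables at the real workspace and lst locations.

  LOCAL_RPMS=$MY_WORKSPACE/std/rpmbuild/RPMS
  LST_DIR=$MY_REPO_ROOT/stx-tools/centos-mirror-tools/config
  for rpm in "$LOCAL_RPMS"/*.rpm; do
      name=$(rpm -qp --queryformat '%{NAME}' "$rpm")
      verrel=$(rpm -qp --queryformat '%{VERSION}-%{RELEASE}' "$rpm")
      # any lst entry for the same package at a different version is suspicious
      grep -rh "^${name}-[0-9]" "$LST_DIR" | grep -v "${name}-${verrel}" | \
          sed "s|^|WARNING: local ${name}-${verrel} vs lst entry: |"
  done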
Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions. 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10 , or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then layered build needs apply the local first policy accross layers. Alternatively, perhaps drop the local first policy, but add an audit tool to detect when a locally built package is being masked in this way. Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sat Jun 6 01:58:13 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 5 Jun 2020 21:58:13 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 402 - Failure! 
Message-ID: <695418467.1608.1591408694247.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 402 Status: Failure Timestamp: 20200606T014803Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200606T013408Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200606T013408Z DOCKER_BUILD_ID: jenkins-master-flock-20200606T013408Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200606T013408Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200606T013408Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Sat Jun 6 01:58:15 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 5 Jun 2020 21:58:15 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 135 - Still Failing! In-Reply-To: <76095783.1602.1591323836489.JavaMail.javamailuser@localhost> References: <76095783.1602.1591323836489.JavaMail.javamailuser@localhost> Message-ID: <109858926.1611.1591408696472.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 135 Status: Still Failing Timestamp: 20200606T013408Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200606T013408Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Sun Jun 7 01:58:30 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 6 Jun 2020 21:58:30 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 405 - Failure! Message-ID: <701780729.1615.1591495111736.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 405 Status: Failure Timestamp: 20200607T014748Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T013413Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200607T013413Z DOCKER_BUILD_ID: jenkins-master-flock-20200607T013413Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T013413Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200607T013413Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Sun Jun 7 01:58:33 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 6 Jun 2020 21:58:33 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 136 - Still Failing! 
In-Reply-To: <1803572455.1609.1591408694767.JavaMail.javamailuser@localhost> References: <1803572455.1609.1591408694767.JavaMail.javamailuser@localhost> Message-ID: <169013534.1618.1591495113943.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 136 Status: Still Failing Timestamp: 20200607T013413Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T013413Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Sun Jun 7 23:28:12 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 7 Jun 2020 19:28:12 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 408 - Failure! Message-ID: <1355935677.1622.1591572492731.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 408 Status: Failure Timestamp: 20200607T231743Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T230408Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200607T230408Z DOCKER_BUILD_ID: jenkins-master-flock-20200607T230408Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T230408Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200607T230408Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Sun Jun 7 23:28:14 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 7 Jun 2020 19:28:14 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 137 - Still Failing! In-Reply-To: <1645969591.1616.1591495112325.JavaMail.javamailuser@localhost> References: <1645969591.1616.1591495112325.JavaMail.javamailuser@localhost> Message-ID: <2070081043.1625.1591572494885.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 137 Status: Still Failing Timestamp: 20200607T230408Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T230408Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From haochuan.z.chen at intel.com Mon Jun 8 02:21:39 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Mon, 8 Jun 2020 02:21:39 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: Message-ID: Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From agung at btech.id Mon Jun 8 02:31:12 2020 From: agung at btech.id (Rahmat Agung) Date: Mon, 8 Jun 2020 09:31:12 +0700 Subject: [Starlingx-discuss] ERROR when deploy stx-monitor. 
Message-ID: I try to deploy stx-monitor on 3 nworker nodes with label like this: ``` worker-3 Ready 2d18h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-client=enabled,elastic-controller=enabled,elastic-data=enabled,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-3,kubernetes.io/os=linux worker-4 Ready 2d18h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-client=enabled,elastic-controller=enabled,elastic-data=enabled,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-4,kubernetes.io/os=linux worker-5 Ready 2d16h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-5,kubernetes.io/os=linux ``` When I check logs: ``` us: <_Rendezvous of RPC that terminated with: status = StatusCode.UNKNOWN details = "release mon-kibana failed: timed out waiting for the condition" debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" > 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller Traceback (most recent call last): 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 473, in install_release 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller metadata=self.metadata) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, None) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller status = StatusCode.UNKNOWN 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller details = "release mon-kibana failed: timed out waiting for the condition" 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller > 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller 2020-06-07 14:07:21.199 7963 DEBUG armada.handlers.tiller [-] [chart=kibana]: Helm getting release status for release=mon-kibana, version=0 get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:539 2020-06-07 14:07:21.402 7963 DEBUG armada.handlers.tiller [-] [chart=kibana]: GetReleaseStatus= name: "mon-kibana" info { status { code: FAILED } first_deployed { seconds: 1591538240 nanos: 977775758 } last_deployed { seconds: 1591538240 nanos: 977775758 } Description: "Release \"mon-kibana\" failed: timed out waiting for the condition" } namespace: "monitor" get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:547 
2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada [-] Chart deploy [kibana] failed: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: mon-kibana - Tiller Message: b'Release "mon-kibana" failed: timed out waiting for the condition' 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada Traceback (most recent call last): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 473, in install_release 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada metadata=self.metadata) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada return _end_unary_response_blocking(state, call, False, None) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada raise _Rendezvous(state, None, None, deadline) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada status = StatusCode.UNKNOWN 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada details = "release mon-kibana failed: timed out waiting for the condition" 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada > 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada During handling of the above exception, another exception occurred: 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada Traceback (most recent call last): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 225, in handle_result 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada result = get_result() 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 236, in 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada if (handle_result(chart, lambda: deploy_chart(chart))): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 214, in deploy_chart 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada chart, cg_test_all_charts, prefix, known_releases) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 239, in execute 2020-06-07 14:07[402248.574350] serial8250: too much work for irq4 :21.404 7963 ERROR armada.handlers.armada timeout=timer) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 486, in install_release 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada raise ex.ReleaseException(release, status, 'Install') 2020-06-07 14:07:21.404 7963 ERROR 
armada.handlers.armada armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: mon-kibana - Tiller Message: b'Release "mon-kibana" failed: timed out waiting for the condition' 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.406 7963 ERROR armada.handlers.armada [-] Chart deploy(s) failed: ['kibana'] 2020-06-07 14:07:21.478 7963 INFO armada.handlers.lock [-] Releasing lock 2020-06-07 14:07:21.486 7963 ERROR armada.cli [-] Caught internal exception: armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kibana'] 2020-06-07 14:07:21.486 7963 ERROR armada.cli Traceback (most recent call last): 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke 2020-06-07 14:07:21.486 7963 ERROR armada.cli self.invoke() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 213, in invoke 2020-06-07 14:07:21.486 7963 ERROR armada.cli resp = self.handle(documents, tiller) 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 81, in func_wrapper 2020-06-07 14:07:21.486 7963 ERROR armada.cli return future.result() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result 2020-06-07 14:07:21.486 7963 ERROR armada.cli return self.__get_result() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result 2020-06-07 14:07:21.486 7963 ERROR armada.cli raise self._exception 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run 2020-06-07 14:07:21.486 7963 ERROR armada.cli result = self.fn(*self.args, **self.kwargs) 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 256, in handle 2020-06-07 14:07:21.486 7963 ERROR armada.cli return armada.sync() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 252, in sync 2020-06-07 14:07:21.486 7963 ERROR armada.cli raise armada_exceptions.ChartDeployException(failures) 2020-06-07 14:07:21.486 7963 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kibana'] 2020-06-07 14:07:21.486 7963 ERROR armada.cli ``` What mean the error above? I just want to know, is stx-monitor stable or still experimental? Because I could not found documentation about it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Mon Jun 8 08:53:52 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 8 Jun 2020 08:53:52 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank, It is not easy to figure out whether/how/when OpenStack-helm-info upstream introduce this issue and then fix it. I also could not find any fix in LP[1], which just mentioned that this intermittent issue not hit us after some changes in related field. Anyhow, below 2 patches should fix potential bug and I could not see the same error log again in our ussuri upgrade EB. 
https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death Since we have passed fully test, we'd better push to merge ussuri upgrade/openstack-helm rebasing patches soon. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月5日 22:32 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020. If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0. Frank [1] https://storyboard.openstack.org/#!/story/2004712 -----Original Message----- From: Liu, ZhipengS Sent: Friday, June 05, 2020 2:36 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. My test step is: 1) ssh to standby controller and sudo reboot -f for it. 2) sudo reboot -f for activated controller All pods can resume after a while. However, I could reproduce this issue with DB 20200516T080009Z. From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes below 2 patches which fixed this stability issue. https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death [1] https://bugs.launchpad.net/starlingx/+bug/1881899 [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:35 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. 
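(As a rough sketch of the recovery check behind the test steps quoted above; controller names and timings are illustrative, and kubectl is assumed to be usable on the surviving controller:)

    ssh controller-1 'sudo reboot -f'    # force-reset the standby controller first
    sudo reboot -f                       # then force-reset the active controller
    # once both nodes are back, watch the openstack namespace until no pods are
    # left outside Running/Completed (e.g. CrashLoopBackOff should clear)
    watch -n 30 'kubectl -n openstack get pods | grep -vE "Running|Completed"'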
One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. 
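(A minimal way to see the state dependence of the decoupling issue described earlier in this thread; the chart/namespace arguments follow the helm-override-show usage mentioned elsewhere in the thread, and exact output will vary:)

    system application-list                                     # note whether stx-openstack is 'applied' or 'uploaded'
    system helm-override-show stx-openstack mariadb openstack   # reported to fail only while the app is 'uploaded'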
In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
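(For illustration only, a hand-applied equivalent of the dynamic override route mentioned above might look roughly like the sketch below; the --values form of helm-override-update and the file path are assumptions, and the chart/namespace names follow the rest of this thread:)

    cat > /tmp/mariadb-ipv6.yaml <<'EOF'
    conf:
      database:
        config_override: |
          [mysqld]
          bind_address=::
    EOF
    system helm-override-update stx-openstack mariadb openstack --values /tmp/mariadb-ipv6.yaml
    system application-apply stx-openstack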
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Mon Jun 8 09:03:01 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 8 Jun 2020 09:03:01 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: Hi Scott, After discussed with Chengde, In order not to introduce these packages version conflict in local mirror, we'd better revert the commit 44a8a1d798dc98d4f6ffcd200237c94585b31c40 with https://review.opendev.org/#/c/734035/ Please help to update cengn build script with below 2 additional repos. build-stx-base.sh --repo local-stx-build,... \ --repo stx-distro,... \ --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月6日 9:30 To: 'Scott Little' ; 'starlingx-discuss at lists.starlingx.io' ; 'YuChengDe' Subject: RE: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! Hi Scott, We have updated the patch below as you see and fixed your comment as well, thanks! https://review.opendev.org/#/c/733426/ It has been verified by Chengde! Many thanks!! After this patch get merged, could you do me a favor to cherry pick below patches to check if OpenStack images build can be triggered successfully by cengn script? (glance, cinder, nova, horizon) https://review.opendev.org/#/c/712880/ Modify build-tools and stable-wheels for Ussuri upgrading https://review.opendev.org/#/c/712862/ Update openstack docker images for stable/ussuri You might need add below repo in your build script. 
--repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ Thanks a lot! Zhipeng From: Liu, ZhipengS Sent: 2020年6月4日 22:36 To: Scott Little >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! Hi Scott, For our OpenStack upgrade case, we may have one more option that is not adding this ceph 13.2.10 repo to local build repo folder. Instead, we add this ceph repo as a parameter when we run build-stx-base.sh. Then this repo only used by OpenStack build. We will verify it tomorrow. Thanks! Zhipeng From: Scott Little > Sent: 2020年6月4日 22:19 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! I see https://review.opendev.org/#/c/733426/9 has been posted With this update, layered builds should pass, and would look like this ... * Flock and iso builds will use 13.2.2. * All container builds uses 13.2.10. * Do we want 13.2.10 in ALL containers? * Any ceph dependent rpms from distro/flock builds that make it into a container (if any), will have been compiled against 13.2.2, but will run against 13.2.10. I'm more comfortable with a increment to the patch level than a decrement. I think we can live with this until we can move to 13.2.10 universally. Monolithic will continue to build, but will remain confused ... All lst files, including container layer lsts, are downloaded before any package is built. Most if not all packages that depend on ceph will build against 13.2.10 as mock/yum does not understand the 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of which is unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see shipped version LOWER than the complied against version. On 2020-06-03 6:08 p.m., Saul Wold wrote: On 6/3/20 2:01 PM, Scott Little wrote: No I don't think that would work. We can't have two versions of the same package competing for dominance within the mock build environments. i.e. on time pkg X builds against 13.2.2, the next time against 13.2.10. The outcome dependent on the vagaries of job scheduling, build speeds, and any other number of factors. If you compile against 13.2.10, will you run ok vs 13.2.2. I wouldn't want to bet on it. The build layering solution might be to throw it in it's own layer. Until we are 100% committed to build layering, we need to converge on ONE version of ceph. Ok, so one option is to move to Ceph 13.2.10 or drop the existing package list update that brings in the python3 and related Ceph packages. Do we need to at least revert that commit in-order to get the build working again? We might need to spend a few minutes to hash this out tomorrow morning at the PTG. Sau! Scott On 2020-06-03 10:52 a.m., Saul Wold wrote: On 6/3/20 1:47 AM, Liu, ZhipengS wrote: Hi Scott, For question #1, When we built openstack ussuri image which is python3 only. It needs python3-rbd and related dependency, so we add librados2-13.2.10 and related packages. For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for python2. Shouldn’t we let the build choose local build first? Following up on this we need to be careful about which we choose, as I said in the other email is this a one-off issue or something that we see more of. So maybe an audit tool would help. 
Another option is moving these packages to container layer, add rpms_centos.lst in config/centos/flock/? I understand this option better after chatting with Zhipeng, I think this might be the best option adding the Updated Ceph / RBD related packages to the container list which will be used for the Usurri container builds but not by the platform OS. This would mean that the containers would have Ceph 13.2.10 related packages and the platform OS would be 13.2.2. Would that cause problems or stability issues? Sau! Thanks! Zhipeng *From:*Scott Little *Sent:* 2020年6月3日15:57 *To:* starlingx-discuss at lists.starlingx.io *Subject:* Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! This was an interesting one. We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso preferres locally built packages over downloaded ones, even if the downloaded on is of higher version. Now that policy is open for debate, but that is what it does. Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions. 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10 , or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then layered build needs apply the local first policy accross layers. Alternatively, perhaps drop the local first policy, but add an audit tool to detect when a locally built package is being masked in this way. 
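(A very rough sketch of what such an audit could look like; this is not an existing build tool, the paths are placeholders built from the MY_WORKSPACE/MY_REPO variables seen in the build reports above, and the name matching is approximate:)

    #!/bin/bash
    # Hypothetical audit: flag package names that exist both as locally built RPMs
    # and as downloaded RPMs, printing both versions so a human can see which one
    # build-iso's "prefer local" policy would mask.
    LOCAL_RPMS=$MY_WORKSPACE/std/rpmbuild/RPMS        # placeholder: locally built RPMs
    MIRROR_RPMS=$MY_REPO/cgcs-centos-repo/Binary      # placeholder: downloaded RPMs
    for rpm in "$LOCAL_RPMS"/*.rpm; do
        name=$(rpm -qp --qf '%{NAME}' "$rpm" 2>/dev/null) || continue
        local_vr=$(rpm -qp --qf '%{VERSION}-%{RELEASE}' "$rpm")
        downloaded=$(find "$MIRROR_RPMS" -name "${name}-[0-9]*.rpm" 2>/dev/null | head -n 1)
        [ -z "$downloaded" ] && continue
        mirror_vr=$(rpm -qp --qf '%{VERSION}-%{RELEASE}' "$downloaded")
        [ "$local_vr" != "$mirror_vr" ] && \
            echo "AUDIT: $name is built locally as $local_vr but also downloaded as $mirror_vr"
    done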
Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Mon Jun 8 09:59:46 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Mon, 8 Jun 2020 11:59:46 +0200 Subject: [Starlingx-discuss] Subcloud on a Virtual Machine In-Reply-To: References: Message-ID: Hi all. Any thoughts on this? Also has anyone ever tried this solution with StarlingX on Virtual Machine at all? Cheers On Wed, Jun 3, 2020 at 9:05 PM Alfredo De Luca wrote: > Hi all. > For testing purposes we are trying to install a subcloud on a VM > (Openstack to be precise) but we get a couple of errors as below. Booting > from an ISO (STX 3.0) we get this > > 1. ERROR: Specified installation (sda) or boot (sda) device is invalid. > then I supposed the ISO is looking for a device *sda* .. so we fixed that > but then another issue occurred and the error now is > 2. Disk "" given in clearpart command does not exist. > Now I wonder if it is possible to install that on top of a VM and also > what could it the fix for the second error. > Any idea/clue? > > Cheers > > > -- > */Alfredo* > > -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Jun 8 14:02:49 2020 From: scott.little at windriver.com (Scott Little) Date: Mon, 8 Jun 2020 10:02:49 -0400 Subject: [Starlingx-discuss] [Build] A new way to test you package's dependencies Message-ID: We now have a new command to test a package for its build dependencies.  It should be used when ever you upversion a package, or make significant changes to it's build scripts (spec files, make files, auto-config ...)    build-pkgs --dep-test It should be used when ever you upversion a package, or make significant changes to it's build scripts (spec files, make files, auto-config ...) Note: This should only be used following a full build-pkgs.  i.e. You need to be sure that an dependencies that we also build are available. One might think that if your package passes a full build (build-pkgs), that you are safe, but this is NOT the case.  When doing a full build, we don't wipe the build environment clean between packages.  
This means that the environment might (or might not) have a tool or library present that your package needs, but fails to list as a BuildRequires in its spec file. It will build successfully one time, but might not build the next. It all depends on what packages were scheduled to build in the same environment before the package of interest. The --dep-test option rebuilds just one package in a clean environment, providing an effective test of the BuildRequires for your package. -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Mon Jun 8 15:46:22 2020 From: helena at openstack.org (helena at openstack.org) Date: Mon, 8 Jun 2020 11:46:22 -0400 (EDT) Subject: [Starlingx-discuss] StarlingX Glossary Message-ID: <1591631182.557525360@apps.rackspace.com> Greetings StarlingX Community! We are on a mission to create a glossary of StarlingX related terms and want your help! As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Similarly, having a glossary of terms has proven to be a good SEO tactic to gain more web traffic; by creating this glossary, we are hoping to have greater visibility to potential contributors, users, and supporting organizations. This is where you come in! We need your help to define the terms that we can use to educate future contributors. Below is an etherpad link. We ask that you add, edit, review, and collaborate on this etherpad to help us make the StarlingX community more accessible and understandable. If you think of more terms to add to the list, please do! As always, feel free to reach out with any questions. 
Cheers, Helena Spease StarlingX: https://etherpad.opendev.org/p/StarlingX_Glossary -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Jun 8 16:49:44 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 8 Jun 2020 16:49:44 +0000 Subject: [Starlingx-discuss] StarlingX R4.0 In-Reply-To: <32416eee-3bf5-56eb-c66b-13f103c67769@kunet.com> References: <32416eee-3bf5-56eb-c66b-13f103c67769@kunet.com> Message-ID: Hello Ammar, Welcome to the StarlingX project! This link should be your starting point: https://www.starlingx.io/ From there, you can access the various community communication channels: https://www.starlingx.io/community/ as well as links to the software https://www.starlingx.io/software/ The latest StarlingX official release is stx.3.0: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0/ The community is working actively on the next release: stx.4.0 which will be released in mid-July. I recommend that you use the starlingx mailing list (cc'd) for any further inquiries. Best Regards, Ghada -----Original Message----- From: Ammar T. Al-Sayegh [mailto:ammar at kunet.com] Sent: Monday, June 08, 2020 8:44 AM To: Khalil, Ghada Subject: StarlingX R4.0 Dear Ghada, I am planning to adopt StarlingX for building an edge cloud for my business. Would you be able to kindly give me access to the latest release of the system? Thank you very much. Dr. Ammar T. Al-Sayegh General Manager, KUNet From scott.little at windriver.com Mon Jun 8 19:06:31 2020 From: scott.little at windriver.com (Scott Little) Date: Mon, 8 Jun 2020 15:06:31 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 137 - Still Failing! In-Reply-To: <2070081043.1625.1591572494885.JavaMail.javamailuser@localhost> References: <1645969591.1616.1591495112325.JavaMail.javamailuser@localhost> <2070081043.1625.1591572494885.JavaMail.javamailuser@localhost> Message-ID: The offending update has been backed out. Flock layer build #138 was a success. Scott On 2020-06-07 7:28 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_layer_flock_master_master > Build #: 137 > Status: Still Failing > Timestamp: 20200607T230408Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T230408Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Mon Jun 8 19:17:32 2020 From: allison at openstack.org (Allison Price) Date: Mon, 8 Jun 2020 14:17:32 -0500 Subject: [Starlingx-discuss] OSF Community Meeting - June 25 & 26 Message-ID: <50093F4B-FD92-4CA1-A03F-55DE0A8F2C3D@openstack.org> Hi everyone, On June 25 (1300 UTC) and June 26 (0200 UTC) , we will be holding the quarterly OSF community [1] that will cover project updates from all OSF-supported projects and events. The StarlingX community is encouraged to prepare a slide and present a 3-5 minute update on the project and community’s progress. The update should cover updates that have occurred since the last community meeting on April 2. If you would like to volunteer to present the StarlingX update for one meeting (or both!) 
please sign up here [1]. We are aiming to finalize the content by Friday, June 19. If you missed the Q1 community meeting, you can see how the upcoming meeting will be structured in this recording [2] and this slide deck [3]. If you have any questions, please let me know. Thanks! Allison [1] https://etherpad.opendev.org/p/OSF_Community_Meeting_Q2 [2] https://zoom.us/rec/share/7vVXdIvopzxIYbPztF7SVpAKXYnbX6a82iMaqfZfmEl1b0Fqb6j3Zh47qPSV_ar2 [3] https://docs.google.com/presentation/d/1l05skj_BCfF8fgYWu4n0b1rQmbNhHp8sMeYcb-v-rdA/edit#slide=id.g82b6d187d5_0_525 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Jun 9 02:21:27 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 9 Jun 2020 02:21:27 +0000 Subject: [Starlingx-discuss] No StarlingX Containerization meeting --> offline update instead Message-ID: There will not be a meeting on Tuesday June 9. Instead status of the stx.4.0 containerization features has been updated on the etherpad [1]. If anyone else has any updates or topics for discussion please add an update to the etherpad. Frank Containers PL [1] https://etherpad.opendev.org/p/stx-containerization -------------- next part -------------- An HTML attachment was scrubbed... URL: From maryx.camp at intel.com Tue Jun 9 02:24:39 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Tue, 9 Jun 2020 02:24:39 +0000 Subject: [Starlingx-discuss] StarlingX Glossary In-Reply-To: References: <1591631182.557525360@apps.rackspace.com> Message-ID: Hi Helena, I am the Project Lead for StarlingX docs and I am happy to help with updates to the Terms list that Brucej linked below. Please ping me if you have questions. thanks, Mary Camp PTIGlobal Technical Writer | maryx.camp at intel.com From: Jones, Bruce E Sent: Monday, June 8, 2020 12:00 PM To: helena at openstack.org; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Glossary Hi Helena. I think we already have a good start on this in the StarlingX documentation [1]. I suggest we focus on improving that glossary instead of starting a new one. Brucej [1] https://docs.starlingx.io/introduction/terms.html From: helena at openstack.org > Sent: Monday, June 8, 2020 8:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Glossary Greetings StarlingX Community! We are on a mission to create a glossary of StarlingX related terms and want your help! As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Similarly, having a glossary of terms has proven to be a good SEO tactic to gain more web traffic; by creating this glossary, we are hoping to have greater visibility to potential contributors, users, and supporting organizations. This is where you come in! We need your help to define the terms that we can use to educate future contributors Below is an etherpad link. We ask that you add, edit, review, and collaborate on this etherpad to help us make the StarlingX community more accessible and understandable. If you think of more terms to add to the list, please do! As always, feel free to reach out with any questions. Cheers, Helena Spease StarlingX: https://etherpad.opendev.org/p/StarlingX_Glossary -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin.sun at intel.com Tue Jun 9 08:38:34 2020 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 9 Jun 2020 08:38:34 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/10/2020 Message-ID: Hi All: Agenda for 6/10 meeting: - PTG update: https://etherpad.opendev.org/p/stx-virtual-PTG-June - ceph containerization: - centos8 and python3 - bugs: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other - open: If have any other topic, feel free to add to https://etherpad.openstack.org/p/stx-distro-other Thanks. BR Austin Sun. From zhipengs.liu at intel.com Tue Jun 9 08:39:30 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 9 Jun 2020 08:39:30 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi all, So far, all block issues and concerns have been addressed. Since we have passed all sanity test, and Ussuri OpenStack has been officially released last month, there should be no more reason to block these patches merge. Next step: Let's push to get ussuri upgrade/openstack-helm rebasing patches merged. We need great help from core guys! https://review.opendev.org/#/q/topic:for_ussuri+(status:open) # Below 6 patches are for OpenStack-helm/infra rebase. (we set first patch with workflow-1 and add depends-on for other patches as we need to merge them together.) Upgrade openstack-helm-infra zhipeng liu starlingx/openstack-armada-app workflow-1 Add mariadb database config override to support ipv6 zhipeng liu starlingx/openstack-armada-app Fix render error in cinder during openstack-helm rebase zhipeng liu starlingx/openstack-armada-app Update download list for openstack-helm upgrade zhipeng liu starlingx/openstack-armada-app Update manifest.yaml file for openstack-helm upgrade. zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm zhipeng liu starlingx/openstack-armada-app # Below 3 patches is for OpenStack upgrade. Update manifest.yaml file for ussuri openstack YU CHENGDE starlingx/openstack-armada-app Modify build-tools and stable-wheels for Ussuri upgrading YU CHENGDE starlingx/root Upgrade openstack docker images for stable/ussuri YU CHENGDE starlingx/upstream After removing required python3 dependent packages from local, we can build out base image and OpenStack service images successfully with below command. =============================================================================== @Scott, please help to update cengn build script with below 2 additional repos and help to trigger image build build-stx-base.sh --repo local-stx-build,... \ --repo stx-distro,... \ --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ Thanks a lot! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月8日 16:54 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, It is not easy to figure out whether/how/when OpenStack-helm-info upstream introduce this issue and then fix it. I also could not find any fix in LP[1], which just mentioned that this intermittent issue not hit us after some changes in related field. Anyhow, below 2 patches should fix potential bug and I could not see the same error log again in our ussuri upgrade EB. 
https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death Since we have passed fully test, we'd better push to merge ussuri upgrade/openstack-helm rebasing patches soon. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月5日 22:32 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020. If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0. Frank [1] https://storyboard.openstack.org/#!/story/2004712 -----Original Message----- From: Liu, ZhipengS Sent: Friday, June 05, 2020 2:36 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. My test step is: 1) ssh to standby controller and sudo reboot -f for it. 2) sudo reboot -f for activated controller All pods can resume after a while. However, I could reproduce this issue with DB 20200516T080009Z. From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes below 2 patches which fixed this stability issue. https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death [1] https://bugs.launchpad.net/starlingx/+bug/1881899 [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:35 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. 
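For reference, the reset-and-recovery check being discussed in this thread boils down to something like the following (a rough sketch only; the controller host name, the "openstack" namespace and the CLI access method are assumptions and will differ per deployment):

    # Reset the standby controller first, then the active one
    ssh sysadmin@controller-1 'sudo reboot -f'
    sudo reboot -f

    # Once both controllers are back, wait until no stx-openstack pod is stuck
    watch -n 10 'kubectl get pods -n openstack | grep -vE "Running|Completed"'

    # Then confirm the OpenStack API answers again
    export OS_CLOUD=openstack_helm
    openstack endpoint list
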
One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. 
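When pods do end up in CrashLoopBackOff like this, the state worth attaching to a launchpad can be captured with something like the commands below (a sketch; the mariadb-server-0 pod name and the "openstack" namespace are taken from logs quoted elsewhere in this thread and may differ on other deployments):

    kubectl get pods -n openstack -o wide | grep -E 'CrashLoopBackOff|Error'
    kubectl describe pod -n openstack mariadb-server-0 | tail -n 40
    kubectl logs -n openstack mariadb-server-0 --previous | tail -n 100
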
In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Dan.Voiculeasa at windriver.com Tue Jun 9 09:54:22 2020 From: Dan.Voiculeasa at windriver.com (Voiculeasa, Dan) Date: Tue, 9 Jun 2020 09:54:22 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , Message-ID: Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat ‘/tmp/hieradata/192.188.204.3.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory\ncp: cannot stat ‘>’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/tmp/hieradata/192.188.204.3.yaml’: No such file or directory", "cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory", "cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory", "cp: cannot stat ‘>’: No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Tue Jun 9 13:04:38 2020 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 9 Jun 2020 06:04:38 -0700 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: Message-ID: Frank, Scott, Davelet: Are there cycles available on Cengn (and people resources) to do a Cengn build with the Ussuri patch set applied? I know this is different than a branch build. I think we have done this kind of thing in the past. This might help to make sure we don't have any more Cengn build issues and could give the Test team a sanity spin with a Ussuri/Cengn build. Note there is a comment for Scott/Davelet at the bottom of Zhipeng's email. Thanks Sau! On 6/9/20 1:39 AM, Liu, ZhipengS wrote: > Hi all, > > So far, all block issues and concerns have been addressed. > Since we have passed all sanity test, and Ussuri OpenStack has been officially released last month, > there should be no more reason to block these patches merge. > > Next step: > Let's push to get ussuri upgrade/openstack-helm rebasing patches merged. We need great help from core guys! > https://review.opendev.org/#/q/topic:for_ussuri+(status:open) > > # Below 6 patches are for OpenStack-helm/infra rebase. (we set first patch with workflow-1 and add depends-on for other patches as we need to merge them together.) 
> Upgrade openstack-helm-infra zhipeng liu starlingx/openstack-armada-app workflow-1 > Add mariadb database config override to support ipv6 zhipeng liu starlingx/openstack-armada-app > Fix render error in cinder during openstack-helm rebase zhipeng liu starlingx/openstack-armada-app > Update download list for openstack-helm upgrade zhipeng liu starlingx/openstack-armada-app > Update manifest.yaml file for openstack-helm upgrade. zhipeng liu starlingx/openstack-armada-app > Upgrade openstack-helm zhipeng liu starlingx/openstack-armada-app > > # Below 3 patches is for OpenStack upgrade. > Update manifest.yaml file for ussuri openstack YU CHENGDE starlingx/openstack-armada-app > Modify build-tools and stable-wheels for Ussuri upgrading YU CHENGDE starlingx/root > Upgrade openstack docker images for stable/ussuri YU CHENGDE starlingx/upstream > > > After removing required python3 dependent packages from local, we can build out base image and OpenStack service images successfully with below command. > =============================================================================== > @Scott, please help to update cengn build script with below 2 additional repos and help to trigger image build > build-stx-base.sh > --repo local-stx-build,... \ > --repo stx-distro,... \ > --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ > --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > Thanks a lot! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月8日 16:54 > To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Friesen, Chris > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > It is not easy to figure out whether/how/when OpenStack-helm-info upstream introduce this issue and then fix it. > I also could not find any fix in LP[1], which just mentioned that this intermittent issue not hit us after some changes in related field. > > Anyhow, below 2 patches should fix potential bug and I could not see the same error log again in our ussuri upgrade EB. > https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death > > Since we have passed fully test, we'd better push to merge ussuri upgrade/openstack-helm rebasing patches soon. > https://review.opendev.org/#/q/topic:for_ussuri+(status:open) > > [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ > > Thanks! > Zhipeng > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月5日 22:32 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Friesen, Chris > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020. 
> > If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0. > > Frank > [1] https://storyboard.openstack.org/#!/story/2004712 > > -----Original Message----- > From: Liu, ZhipengS > Sent: Friday, June 05, 2020 2:36 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Friesen, Chris > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. > My test step is: > 1) ssh to standby controller and sudo reboot -f for it. > 2) sudo reboot -f for activated controller All pods can resume after a while. > > However, I could reproduce this issue with DB 20200516T080009Z. > From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. > > In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. > It includes below 2 patches which fixed this stability issue. > https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death > > [1] https://bugs.launchpad.net/starlingx/+bug/1881899 > [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月3日 22:35 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > This is not a new requirement. Users expect the software to recover when resets occur. > > As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Wednesday, June 03, 2020 10:28 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Frank, > > Have we pass this case before? Is it a new requirement? > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月3日 22:12 > To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 > > Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. 
> > Frank > > -----Original Message----- > From: Miller, Frank > Sent: Tuesday, June 02, 2020 10:38 PM > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > We used a build from May 28. > > As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Tuesday, June 02, 2020 10:04 PM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > Thanks for your quick update! > Which build are you using to test this case? > Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. > BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ > > Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? > I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月3日 8:55 > To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. > > Frank > > -----Original Message----- > From: Miller, Frank > Sent: Tuesday, June 02, 2020 12:25 PM > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. > > In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 > > The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. > > But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Tuesday, June 02, 2020 11:47 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
> > For LP https://bugs.launchpad.net/starlingx/+bug/1881454 > Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. > We should fix this regression ASAP! > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月2日 16:48 > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank and all, > > Update for issue 2. > I raised a new LP to track it. > https://bugs.launchpad.net/starlingx/+bug/1881722 > Below is the time statistics. It seems reasonable. No obvious issue found. > 1) 3~4min for host restart and get ready. > 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) > 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) > 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? > > For LP https://bugs.launchpad.net/starlingx/+bug/1881454 > Unable to unlock controller after swact and lock w/ openstack applied > And https://bugs.launchpad.net/starlingx/+bug/1881711 > system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. > Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月1日 16:20 > To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > I also tested the issue 2 with latest daily build on duplex setup. > The conclusion is that the issue is there all the time. > This issue might not be fixed soon, but should not block OpenStack upgrade, right? > > For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. > https://review.opendev.org/#/q/topic:for_ussuri+(status:open) > Your review and comments are welcome! > > As for issue 2, some detail info FYI. > It also needs to wait for around 10 min before all pods are ready again after reboot for master build. > It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. > neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) > openvswitch-db-8fxkw > Related key logs below. 
> Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition > Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition > Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition > Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition > Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) > Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) > > Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? > Your comment is appreciated! > > Thanks! > Zhipeng > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年5月29日 9:42 > To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > Glad to see your quick reply!! > For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? > > For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? > > According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. > > [1] https://bugs.launchpad.net/starlingx/+bug/1855474 > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年5月29日 1:07 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Thanks Zhipeng. > > Good to see progress on IPv6. > Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Thursday, May 28, 2020 5:06 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > Nicolae already added test case description. Thanks Nicolae! > > I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. > No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. > > For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. > https://review.opendev.org/#/c/731461/ > https://review.opendev.org/#/c/731470/ > > Thanks! 
> Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年5月27日 22:43 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? > > For the controller reset testcases I'd like to see the test result for the following: > Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: > - Lock/unlock of standby controller > - reset (ie: reboot -f) of the standby controller > - reset (ie: reboot -f) of the active controller > - reapply of stx-openstack after the above scenarios > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Wednesday, May 27, 2020 9:15 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > We have done below tests. > 1) Sanity tests by Nicolae. > AIO - Simplex > Setup 04 TCs [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 49 TCs [PASS] > Sanity Platform 07 TCs [PASS] > > TOTAL: [ 61 TCs ] > > AIO - Duplex > Setup 04 TCs [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 52 TCs [PASS] > Sanity Platform 07 TCs [PASS] > > TOTAL: [ 64 TCs ] > > Standard - Local Storage (2+2) > Setup 04 TCs [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 52 TCs [PASS] > Sanity Platform 08 TCs [PASS] > > TOTAL: [ 65 TCs ] > > Standard External - Dedicated Storage (2+2+2) > Setup 04 TCs [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 52 TCs [PASS] > Sanity Platform 09 TCs [PASS] > > TOTAL: [ 66 TCs ] > > 2) NFV scenario test by me > on duplex/multi standard virtual setup > duplex bare metal setup > ===== Setup ================================================================================================================================= > 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] > 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] > 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] > 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] > 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] > 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] > 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] > 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] > 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] > 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] > 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] > 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] > 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] > 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] > 2020-05-14 02:30:05.786 Create network internal .................................... 
[OKAY] > 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] > 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] > 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] > 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] > 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] > 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] > 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] > 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] > 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= > ===== Test Iteration 0 (single-execution) =================================================================================================== > 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) > 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) > 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) > 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) > 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) > 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) > 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) > 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) > 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) > 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) > 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) > 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) > 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) > 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) > 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) > 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) > Total-Tests: 16 Execution-Time: 0:16:11.676 > > 3) Another 2 test > a) Using IPv6 > It can pass with workaround now. I need one more fix for it. 
> In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below > config_override: | > [mysqld] > bind_address=:: > However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" > I tried many methods, but could not remove the first line in 20-override.cnf > mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf > |- > [mysqld] > bind_address=:: > I can only add it in manifest.yaml as a static override like below. > values: > conf: > database: > config_override: | > [mysqld] > bind_address=:: > > b) Reset of controllers and check status of OpenStack while a controller is rebooting. > I have tested it and pass on simplex. > For duplex, I have a setup issue in my side. > @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! > > Zhipeng > > > > -----Original Message----- > From: Miller, Frank > Sent: 2020年5月26日 21:13 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > Can you publish the list of tests that have been run for openstack? > > Also has openstack been tested for the following scenarios: > 1) Using IPv6 > 2) Reset of controllers and check status of openstack while a controller is rebooting? > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Monday, May 25, 2020 3:14 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi all, > > We have passed all sanity test on all setup. Thanks Nicolae!! > We also built out OpenStack service images from layered build environment. > > Please help to review and push below patches to be merged, thanks! > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) > > BRs > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年5月14日 16:49 > To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi all, > > Call for patch review again! > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年5月9日 8:38 > To: Saul Wold ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Agree! > > -----Original Message----- > From: Saul Wold > Sent: 2020年5月9日 0:29 > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. > > Full Stop! > > Sau! > > > On 5/8/20 9:05 AM, Miller, Frank wrote: >> Until we can get sanity passing for several days in a row I strongly >> suggest we do not allow any further changes into the load related to >> OpenStack.  Folks can continue with reviews but let’s hold off >> allowing merges related to a new OpenStack version. >> >> Frank >> >> *From:*Liu, ZhipengS >> *Sent:* Friday, May 08, 2020 11:59 AM >> *To:* starlingx-discuss >> *Cc:* YU CHENGDE ; Penney, Don >> >> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! 
>> >> Hi all, >> >> Please help to review OpenStack Ussuri upgrade patches. >> >> Our target is to get all below patches merged by end of next week. >> >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >> :merged) >> >> During OpenStack upgrade for StarlingX, we have to move python2.7 to >> python3.6 for OpenStack services as ussuri release only support python3. >> >> We also rebased openstack-helm/helm-infra to latest version. >> >> Engineering build test status. >> >> 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >> 2. nfv_scenario_tests PASS on simplex bare metal setup. >> 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. >> >> Thanks! >> >> Zhipeng >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From helena at openstack.org Tue Jun 9 16:53:50 2020 From: helena at openstack.org (helena at openstack.org) Date: Tue, 9 Jun 2020 12:53:50 -0400 (EDT) Subject: [Starlingx-discuss] StarlingX Glossary In-Reply-To: References: <1591631182.557525360@apps.rackspace.com> Message-ID: <1591721630.566521471@apps.rackspace.com> Hi Bruce, Thank you for sending me the glossary! Yes, we will be using the etherpad to get community feedback and then editing the present glossary accordingly. Cheers, Helena -----Original Message----- From: "Jones, Bruce E" Sent: Monday, June 8, 2020 12:00pm To: "helena at openstack.org" , "starlingx-discuss at lists.starlingx.io" Subject: RE: [Starlingx-discuss] StarlingX Glossary Hi Helena. I think we already have a good start on this in the StarlingX documentation [1]. I suggest we focus on improving that glossary instead of starting a new one. Brucej [1] [ https://docs.starlingx.io/introduction/terms.html ]( https://docs.starlingx.io/introduction/terms.html ) From: helena at openstack.org Sent: Monday, June 8, 2020 8:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Glossary Greetings StarlingX Community! 
We are on a mission to create a glossary of StarlingX related terms and want your help! As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Similarly, having a glossary of terms has proven to be a good SEO tactic to gain more web traffic; by creating this glossary, we are hoping to have greater visibility to potential contributors, users, and supporting organizations. This is where you come in! We need your help to define the terms that we can use to educate future contributors Below is an etherpad link. We ask that you add, edit, review, and collaborate on this etherpad to help us make the StarlingX community more accessible and understandable. If you think of more terms to add to the list, please do! As always, feel free to reach out with any questions. Cheers, Helena Spease StarlingX: [ https://etherpad.opendev.org/p/StarlingX_Glossary ]( https://etherpad.opendev.org/p/StarlingX_Glossary ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Jun 9 17:54:35 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 9 Jun 2020 17:54:35 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 10, 2020) Message-ID: Hi all, reminder of tomorrow's TSC/Community call. Please feel free to add items to the agenda [0] for the Community call beforehand. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200610T1400 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jun 9 18:36:11 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 9 Jun 2020 20:36:11 +0200 Subject: [Starlingx-discuss] StarlingX confirmation review this Thursday Message-ID: <14E9D19B-7699-4BA2-B169-DFCB7E5A75B1@gmail.com> Hi StarlingX Community, It is a friendly reminder that the OSF Board meeting where we will have the project confirmation review and discussion with the OpenStack Foundation Board of Directors will take place this Thursday (June 11). The StarlingX slot is currently scheduled for 7:45am US Pacific Time. You can find the dial in and meeting details on this wiki: https://wiki.openstack.org/wiki/Governance/Foundation/11June2020BoardMeeting Please let me know if you have any questions. Thanks, Ildikó From ildiko.vancsa at gmail.com Tue Jun 9 19:41:50 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 9 Jun 2020 21:41:50 +0200 Subject: [Starlingx-discuss] PTG recordings Message-ID: <215864BD-8D09-439D-92D5-7F5F87EA84E1@gmail.com> Hi, Here are the links to last week’s PTG recordings with the corresponding passwords: * https://zoom.us/rec/play/6JZ7JOus_T03E4HHtwSDBKR5W43ofKqs0HIe8vBZmEi0AXIBYVHwZbUWZOoqdmCi1TMVaF5q8032Aa6y * Password: 5t%0?%89 * https://zoom.us/rec/play/7pwscuD7rDM3SdeUsgSDUfUqW9W1fa6shCMWr_FfyxuwB3VSYAGuMuMbauLIooiFoOfRE4H73YZjsK8t * Password: 2g=!qIsg * https://zoom.us/rec/play/6Z0vdOj6pjo3E92S4gSDAaJ9W43oeP6s0ScYrvNZzEfmAHYGO1fzYLYRa-JJgWzoFa4qxGvNMt2XU0O9 * Password: 9v at LNk9E Please let me know if you have any issues accessing the videos. 
Thanks, Ildikó From nicolae.jascanu at intel.com Tue Jun 9 20:38:42 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Tue, 9 Jun 2020 20:38:42 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200608T175940Z Message-ID: Sanity Test from 2020-June-08 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200608T175940Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200608T175940Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) - was not used because it was reserved for regression testing Regards, STX Validation Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Wed Jun 10 03:32:07 2020 From: yong.hu at intel.com (Hu, Yong) Date: Wed, 10 Jun 2020 03:32:07 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Message-ID: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks!   Regards, Yong From taimoor.imtiaz at intel.com Wed Jun 10 07:32:03 2020 From: taimoor.imtiaz at intel.com (Imtiaz, Taimoor) Date: Wed, 10 Jun 2020 07:32:03 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG Message-ID: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> Hello Folks, I was going through the PTG discussions and etherpads and came across the topic of community and users. 
Although I’m a new-ish member of the community, I’d like to highlight some things we can also look at: Discussion Forums (Discourse, GitHub Discussions): We are using mailing lists for all discussions today. Most cloud-native projects are using Discourse forums (e.g. Kubernetes, Docker, LXC, LXD, LXCFS, etc. – virtually everyone in this space is part of a Discourse community. I want to double-stress this point actually). GitHub recently announced the Beta of Discussions. If STX is looking to build a community there, Discussions might be a nice, low-cost place to host the community. Besides this, many communities have Slack and Discord teams. But forums are infinitely more discoverable (if we’re not talking about ad-hoc discussions). Participation in other communities + Adoption Stories: We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. Fin’: In my personal experience: I’ve been using and talking about STX for the last 6 months. It is strange that for talking about STX internally, we’re using tools like MS Teams and Slack or Yammer/Discourse/PlanetBlue within our respective companies but the community has a 2nd class experience. In my opinion mailing lists and IRC are not the most modern way of managing large communities for modern, cloud-native projects. I’m sorry if this was already discussed some time ago and this is a repetition (Discourse has cool features to resolve these sorts of discussions btw. 😉) Best Taimoor Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.liu at windriver.com Tue Jun 9 15:49:17 2020 From: yang.liu at windriver.com (Liu, Yang (YOW)) Date: Tue, 9 Jun 2020 15:49:17 +0000 Subject: [Starlingx-discuss] [ Test ] meeting notes - 06/09/2020 Message-ID: Agenda for 6/9/2020 Attendees: Yang, Ruediger, GeorgeP, Mihail, Oliver, Nicolae, Andrew · Sanity Status: * Build issue in week of June 2nd. Today's sanity is ongoing and looking good so far. * There was a discussion in vPTG to add force reboot controller test into sanity - it is currently in progress. ETA: end of this week. · stx4.0 testing: * Feature testing: § https://docs.google.com/spreadsheets/d/1C9n4aRQT7xMyTDCT5sfuZGNI9ermAX5BYRypzcCpQ6U/edit#gid=0 § Centos8: · Two issues/test cases unanswered - test team is not actively working on this. Will move back on this sometime this week. § Ceph - Rook - taken out from stx 4.0 - test activities paused § Upversion Openstack services used by flock components on host: · Test completed - feature spreadsheet updated. § Upgrade Containerized OpenStack to Ussuri (and OpenStack helm rebase) · Planned sanity is completed. IPv6 is not covered. · Suggest to cover all openstack components as a minimum - e.g., nova, neutron, cinder, telemetry, glance, heat, etc · Also should run some automated regression and update automation code if openstack client changed. 
· Nicolae will contact Zhipeng for latest load for Ussuri § Windows Active directory completed · Testing is completed with small add-on to support multiple-dex § Red fish virtual media support - testing completed § Kubernetes Upgrade Support · Completed - feature spreadsheet updated. § Kata Containers · Kata container test completed · Nicolae to check with designer on this issue: "Check PID namespaces" - ETA: end of this week. § TSN · Still setting it up - complicated setups o Best case scenario: setup complete this week. § B&R with etcd database · Feature testing completed, spreadsheet updated · Weekly based regression is done - simplex system is passing * Regression testing: § Regression started - Both teams are making good progress - will update the regression spreadsheet at end of week. § Some Robot tests don't have proper teardown - manually workaround it. e.g., reset mtu, deleting network, etc, some are affecting rest of the test cases · Long term plan is to switch to pytest § Stability issues encountered · Yang's team: leave host reboot tests to the end. · Nic's team: sometimes have to reinstall system o Saw issue in lock/unlock controller - passes in sanity, but sometimes fails in regression - need to report new LP if encountered again. § Telemetry test cases fail - openstack event list does not work, LP opened. § Will continue with Regression with latest green load - will wait for 0608's sanity results § Regression will be tight if Ussuri merges late · Open topic * Test automation § Need automated installation script · Robot framwork installation scripts are used in daily sanity with basic setup and provisioning - needs libvert and qemu installed o https://opendev.org/starlingx/test/src/branch/master/automated-robot-suite o Nic: Look into adding to Docs/Wiki after stx4.0. § Nic will publish the robot regression test cases to github · https://github.com/starlingx-staging/robot-tests § After stx4.0, put more effort on test automation · Nic: move some robot regression to pytest · Yang: automate new feature test cases * Test in the open § Yang: coming up with VM requirement to send to opendev · Will discuss with networking expert for interface requirement -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Jun 10 13:20:37 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 10 Jun 2020 09:20:37 -0400 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: Message-ID: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> CENGN cycles aren't a problem.  People resources is a challenge. So the ask is for a manual build, on CENGN, adding in the nine patches listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). .. and the addition of two repos to the build-stx-base.sh step build-stx-base.sh    --repo local-stx-build,... \    --repo stx-distro,... \    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ Is that correct? Scott On 2020-06-09 9:04 a.m., Saul Wold wrote: > > Frank, Scott, Davelet: > > Are there cycles available on Cengn (and people resources) to do a > Cengn build with the Ussuri patch set applied?  I know this is > different than a branch build.  I think we have done this kind of > thing in the past. 
> > This might help to make sure we don't have any more Cengn build issues > and could give the Test team a sanity spin with a Ussuri/Cengn build. > > Note there is a comment for Scott/Davelet at the bottom of Zhipeng's > email. > > Thanks >   Sau! > > > On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >> Hi all, >> >> So far, all block issues and concerns have been addressed. >> Since we have passed all sanity test, and Ussuri OpenStack has been >> officially released last month, >> there should be no more reason to block these patches merge. >> >> Next step: >> Let's push to get ussuri upgrade/openstack-helm rebasing patches >> merged. We need great help from core guys! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> >> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >> patch with workflow-1 and add depends-on for other patches as we need >> to merge them together.) >> Upgrade openstack-helm-infra zhipeng liu    >> starlingx/openstack-armada-app       workflow-1 >> Add mariadb database config override to support ipv6 zhipeng liu    >> starlingx/openstack-armada-app >> Fix render error in cinder during openstack-helm rebase zhipeng >> liu    starlingx/openstack-armada-app >> Update download list for openstack-helm upgrade zhipeng liu    >> starlingx/openstack-armada-app >> Update manifest.yaml file for openstack-helm upgrade.                >> zhipeng liu starlingx/openstack-armada-app >> Upgrade openstack-helm zhipeng liu    starlingx/openstack-armada-app >> >> # Below 3 patches is for OpenStack upgrade. >> Update manifest.yaml file for ussuri openstack                      >> YU CHENGDE starlingx/openstack-armada-app >> Modify build-tools and stable-wheels for Ussuri upgrading    YU >> CHENGDE    starlingx/root >> Upgrade openstack docker images for stable/ussuri        YU >> CHENGDE    starlingx/upstream >> >> >> After removing required python3 dependent packages from local, we can >> build out base image and OpenStack service images successfully with >> below command. >> =============================================================================== >> >> @Scott, please help to update cengn build script with below 2 >> additional repos and help to trigger image build >> build-stx-base.sh >>    --repo local-stx-build,... \ >>    --repo stx-distro,... \ >>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> Thanks a lot! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月8日 16:54 >> To: 'Miller, Frank' ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> It is not easy to figure out whether/how/when OpenStack-helm-info >> upstream introduce this issue and then fix it. >> I also could not find any fix in LP[1], which just mentioned that >> this intermittent issue not hit us after some changes in related field. >> >> Anyhow, below 2 patches should fix potential bug and I could not see >> the same error log again in our ussuri upgrade EB. >> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >> state management thread death >> >> Since we have passed fully test, we'd better push to merge ussuri >> upgrade/openstack-helm rebasing patches soon. 
>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >> >> Thanks! >> Zhipeng >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月5日 22:32 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> This looks promising.  Your theory is that the 2 openstack-helm-infra >> patches will fix the mariadb recovery issues.  These 2 patches were >> merged in the openstack-helm-infra project in January and February of >> 2020.   What would be good to know is what broke mariadb recovery >> between April of 2019 when Chris Friesen finished up his story [1] >> and our current loads today.  The most likely explanation is the >> upversion of Train or the upversion to openstack-helm-infra done in >> November 2019 introduced the mariadb recovery issues.  And then the >> openstack-helm folks found and fixed the issue earlier in 2020. >> >> If we had more time the preferred approach would be to merge just the >> openstack-helm-infra changes first to prove they address mariadb >> recovery and then in a separate commit merge Ussuri.  But since you >> have validated that mariadb recovers with your Ussuri branch and this >> branch has these openstack-helm commits, I support letting Ussuri >> merge into stx.4.0. >> >> Frank >> [1] https://storyboard.openstack.org/#!/story/2004712 >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Friday, June 05, 2020 2:36 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> As for OpenStack not recovering after both controllers are reset [1] >> I could not reproduce this issue with my Ussuri upgrade EB. >> My test step is: >> 1) ssh to standby controller and sudo reboot -f for it. >> 2) sudo reboot -f for activated controller All pods can resume after >> a while. >> >> However, I could reproduce this issue with DB 20200516T080009Z. >>  From error logs,  it is an old issue analyzed by Chris Friesen in >> [2] early last year. >> >> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >> It includes below 2 patches which fixed this stability issue. >> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >> state management thread death >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 22:35 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> This is not a new requirement.  Users expect the software to recover >> when resets occur. >> >> As I had mentioned at the PTG yesterday I know personally that this >> test passed in stx3.0 before the upversion to train. Someone else who >> performs testing can look to determine when this test was done as >> part of feature testing after train was delivered as it should have >> been tested as part of stx.3.0 as well.  I do not know when this >> started to break.  
One topic we will discuss at the PTG tomorrow will >> be how to improve our test coverage and automation so this type of >> issue can be found immediately as new code is being delivered. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Wednesday, June 03, 2020 10:28 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Frank, >> >> Have we pass this case before?  Is it a new requirement? >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 22:12 >> To: Miller, Frank ; Liu, ZhipengS >> ; starlingx-discuss at lists.starlingx.io; >> Church, Robert >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Yong/Zhipeng - the LP for openstack not recovering after both >> controllers are reset is >> https://bugs.launchpad.net/starlingx/+bug/1881899 >> >> Ovidiu is investigating and will provide any updates from his >> investigation.  Please continue to keep us informed of your >> investigation. >> >> Frank >> >> -----Original Message----- >> From: Miller, Frank >> Sent: Tuesday, June 02, 2020 10:38 PM >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> We used a build from May 28. >> >> As for the decoupling issue these are actively being worked.  If you >> run the system helm-override-show command when the stx-openstack app >> is applied you won’t see the CLI command fail.  It only fails when >> you try a helm-override-show when the app is in uploaded state.  In >> any case this will be fixed shortly. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Tuesday, June 02, 2020 10:04 PM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Thanks for your quick update! >> Which build are you using to test this case? >> Since decoupling commits introduced several regressions (at least >> 2),  not propose to do this kind of stability test with latest build. >> BTW, do we have plan to revert them considering this stability risk?  >> Our Ussuri upgrade patches is waiting for it☹ >> >> Furthermore, we have not seen this test case that force reboot both >> controllers at the same time. Is it a new requirement?  If not , have >> we pass this case before, which build? >> I'd like to help on it with the pass build for comparative analysis. >> From my point , mariadb might not work if we reboot both controllers. >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 8:55 >> To: Miller, Frank ; Liu, ZhipengS >> ; starlingx-discuss at lists.starlingx.io; >> Church, Robert >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> An update on our testing and analysis today.  We are able to >> reproduce the issue with OpenStack not recovering when we trigger a >> reboot of both AIO controllers at the same time.  This results in >> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >> openstack commands not working indefinitely after the controllers >> recover.  We'll create a launchpad tomorrow to track this issue. 
>> >> Frank >> >> -----Original Message----- >> From: Miller, Frank >> Sent: Tuesday, June 02, 2020 12:25 PM >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Thanks Zhipeng for the analysis.  What is challenging here is the >> multitude of issues. >> >> In our debug of openstack the past few days we are seeing the app >> fail completely.  After investigation this issue is a Day 1 >> containerd issue.  This is tracked in LP: >> https://bugs.launchpad.net/starlingx/+bug/1881353 >> >> The issue you are seeing on a swact is a new and very recent issue >> tied to the decoupling commits that were merged late last week.  Bob >> is investigating and I expect he'll have a fix soon for that. >> >> But the issues we are most concerned with are when we see mariadb >> crashing and not able to recover or with openstack services not >> working for longer periods of time.  We're attempting to isolate the >> sequence of events that trigger this. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Tuesday, June 02, 2020 11:47 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>              Unable to unlock controller after swact and lock w/ >> openstack applied I also tested with daily build 20200516T080009Z. >> However, it could not be reproduced. >> We should  fix this regression ASAP! >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月2日 16:48 >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank and all, >> >> Update for issue 2. >> I raised a new LP to track it. >> https://bugs.launchpad.net/starlingx/+bug/1881722 >> Below is the time statistics. It seems reasonable. No obvious issue >> found. >> 1) 3~4min for host restart and get ready. >> 2) 2~3min for mariadb terminating, initialization, get ready. (then >> configmap sync is ready) >> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a >> little, as it can retry quickly to connect ovs-vsctl: >> unix:/var/run/openvswitch/db.sock) >> 4) 1min for other pods ready, like neutron-ovs-agent which depends on >> ovs-db. ) Any comment? >> >> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>              Unable to unlock controller after swact and lock w/ >> openstack applied >>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>              system helm-override-show stx-openstack mariadb >> openstack crash  It seems related to openstack plugin decouple >> related patches. Should be a regression. >>   Please see our update in this 2 LPs for detail info.  @Bob, could >> you pls help further check it and your patches, thanks! >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月1日 16:20 >> To: 'Miller, Frank' ; >> 'starlingx-discuss at lists.starlingx.io' >> ; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> I also tested the issue 2 with latest daily build on duplex setup. >> The conclusion is that the issue is there all the time. 
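As a minimal sketch of how recovery can be checked after resetting both controllers (assuming the usual openstack namespace and the mariadb-server-0 pod name mentioned elsewhere in this thread; names may differ on a given system):

    # Sketch only: namespace and pod names are assumptions taken from this thread.
    kubectl -n openstack get pods | grep -Ev 'Running|Completed'        # pods still not recovered
    kubectl -n openstack describe pod mariadb-server-0 | tail -n 20     # recent events for mariadb
    kubectl -n openstack logs mariadb-server-0 --previous | tail -n 50  # log from the crashed container
    # platform-level view of the stx-openstack application
    source /etc/platform/openrc
    system application-list | grep stx-openstack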
>> This issue might not be fixed soon, but should not block OpenStack >> upgrade, right? >> >> For 9 OpenStack patches below, I have removed all workflow-1, except >> the first patch and add depends-on all them. >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> Your review and comments are welcome! >> >> As for issue 2, some detail info FYI. >> It also needs to wait for around 10 min before all pods are ready >> again after reboot for master build. >> It stuck on below 2 pods for 10 min. The same as the one I saw with >> my OpenStack upgrade engineering build. >>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >> openvswitch-db) >>       openvswitch-db-8fxkw >> Related key logs below. >>    Warning  FailedMount  2m19s              kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >> failed to sync secret cache: timed out waiting for the condition >>    Warning  FailedMount  2m19s              kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >> sync configmap cache: timed out waiting for the condition >>    Warning  FailedMount  105s               kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >> failed to sync secret cache: timed out waiting for the condition >>    Warning  FailedMount  105s               kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >> sync configmap cache: timed out waiting for the condition >>    Warning  Unhealthy    30s                kubelet, controller-1  >> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >> database connection failed (Permission denied) >>    Warning  Unhealthy    7s                 kubelet, controller-1  >> Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >> database connection failed (Permission denied) >> >> Is it the same stability issue as the one reported from your test >> team?  I can only see this issue after force rebooting. What is our >> expected recovery time? >> Your comment is appreciated! >> >> Thanks! >> Zhipeng >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月29日 9:42 >> To: 'Miller, Frank' ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Glad to see your quick reply!! >> For OpenStack upgrade task, we have finished all test and get patches >> ready for more than 2 weeks, but no any review comments and feedback >> from your side.  What's the next step? >> >> For issue # 2,  in community meeting notes,  I saw that you had some >> stability issue from WR local test team. But so far, I do not see any >> LP for the detail info. You should ask them to do that!  Right? >> >> According to your concern, I tried to reproduce it with my build >> (cherry pick OpenStack upgrade patches)yesterday, and the original >> issue [1] was not seen any more, mariadb got ready quickly, no >> regression. >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月29日 1:07 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Thanks Zhipeng. >> >> Good to see progress on IPv6. 
>> Waiting for 10 minutes for pods to recover isn't a good result. Is >> there a LP open on this issue?  Which pods are not ready? What can >> you tell us about this 10 minute outage? >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Thursday, May 28, 2020 5:06 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Nicolae already added test case description. Thanks Nicolae! >> >> I also did below test on AIO-DX virtual setup, exactly according to >> your mentioned steps. >> No issue found, but just need to wait for around 10 min before all >> pods are ready again after reboot. >> >> For ipv6 issue, I have submitted new patch for it since dynamic >> override for database config did not work. >>   https://review.opendev.org/#/c/731461/ >>   https://review.opendev.org/#/c/731470/ >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月27日 22:43 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> Thanks for the info.  You have provided the # of testcases but not >> what those testcase do.  Where can I find a description of what the >> OpenStack testcases do? >> >> For the controller reset testcases I'd like to see the test result >> for the following: >> Is openstack usable during the following scenarios on AIO-DX and on >> Standard configurations: >> - Lock/unlock of standby controller >> - reset (ie: reboot -f) of the standby controller >> - reset (ie: reboot -f) of the active controller >> - reapply of stx-openstack after the above scenarios >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Wednesday, May 27, 2020 9:15 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> We have done below tests. >> 1) Sanity tests by Nicolae. 
>> AIO - Simplex >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             49 TCs [PASS] >> Sanity Platform                 07 TCs [PASS] >> >> TOTAL: [ 61 TCs ] >> >> AIO - Duplex >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 07 TCs [PASS] >> >> TOTAL: [ 64 TCs ] >> >> Standard - Local Storage (2+2) >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 08 TCs [PASS] >> >> TOTAL: [ 65 TCs ] >> >> Standard External - Dedicated Storage (2+2+2) >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 09 TCs [PASS] >> >> TOTAL: [ 66 TCs ] >> >> 2) NFV scenario test by me >>      on duplex/multi standard virtual setup >>            duplex bare metal setup >> ===== Setup >> ================================================================================================================================= >> 2020-05-14 02:30:05.524  Create flavor small >> ........................................ [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >> .............................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_swap >> ................................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >> ......................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium >> ....................................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >> ............................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_swap >> .................................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >> ........................ [OKAY] >> 2020-05-14 02:30:05.653  Create image cirros >> ........................................ [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros >> ....................................... [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >> ............................. [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-swap >> .................................. [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >> ........................ [OKAY] >> 2020-05-14 02:30:05.695  Create volume empty_volume >> ................................. [OKAY] >> 2020-05-14 02:30:05.786  Create network internal >> .................................... [OKAY] >> 2020-05-14 02:30:06.158  Create network external >> .................................... [OKAY] >> 2020-05-14 02:30:06.772  Create subnet internal >> ..................................... [OKAY] >> 2020-05-14 02:30:07.661  Create subnet external >> ..................................... [OKAY] >> 2020-05-14 02:30:08.553  Create instance cirros-1 >> ................................... [OKAY] >> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >> ......................... [OKAY] >> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >> .............................. [OKAY] >> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1  >> .................... 
[OKAY] >> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >> ............................. [OKAY] >> 2020-05-14 02:31:21.241  Create instance cirros-image-with-volumes-1  >> ................ [OKAY] >> ============================================================================================================================================= >> ===== Test Iteration 0 (single-execution) >> =================================================================================================== >> 2020-05-14 02:33:04.172  Test Instance-Pause >> ........................................ [OKAY]  (2020-05-14 >> 02:33:18.078 Δ=0:00:12.870) >> 2020-05-14 02:33:35.073  Test Instance-Unpause >> ...................................... [OKAY]  (2020-05-14 >> 02:33:41.608 Δ=0:00:05.866) >> 2020-05-14 02:33:53.049  Test Instance-Suspend >> ...................................... [OKAY]  (2020-05-14 >> 02:33:59.546 Δ=0:00:05.792) >> 2020-05-14 02:34:11.103  Test Instance-Resume >> ....................................... [OKAY]  (2020-05-14 >> 02:34:17.756 Δ=0:00:05.937) >> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >> ................................ [OKAY]  (2020-05-14 02:36:45.923 >> Δ=0:02:15.748) >> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >> ................................ [OKAY]  (2020-05-14 02:37:14.504 >> Δ=0:00:11.704) >> 2020-05-14 02:37:30.673  Test Instance-Stop >> ......................................... [OKAY]  (2020-05-14 >> 02:38:44.543 Δ=0:01:13.220) >> 2020-05-14 02:39:00.481  Test Instance-Start >> ........................................ [OKAY]  (2020-05-14 >> 02:39:07.198 Δ=0:00:06.068) >> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >> ................................. [OKAY]  (2020-05-14 02:39:41.692 >> Δ=0:00:22.306) >> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >> ................................. [OKAY]  (2020-05-14 02:41:22.720 >> Δ=0:01:24.179) >> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >> ......................... [OKAY]  (2020-05-14 02:41:45.441 >> Δ=0:00:05.884) >> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >> .......................... [OKAY]  (2020-05-14 02:43:36.381 >> Δ=0:00:21.637) >> 2020-05-14 02:43:52.320  Test Instance-Resize >> ....................................... [OKAY]  (2020-05-14 >> 02:45:16.409 Δ=0:01:22.812) >> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >> ............................... [OKAY]  (2020-05-14 02:45:39.119 >> Δ=0:00:05.777) >> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >> ................................ [OKAY]  (2020-05-14 02:47:30.175 >> Δ=0:00:21.748) >> 2020-05-14 02:47:46.230  Test Instance-Rebuild >> ...................................... [OKAY]  (2020-05-14 >> 02:48:59.762 Δ=0:01:12.980) >> Total-Tests: 16     Execution-Time: 0:16:11.676 >> >> 3) Another 2 test >>      a) Using IPv6 >>           It can pass with workaround now.  I need one more fix for it. >>           In my previous patch https://review.opendev.org/#/c/716524 >> (merged), I dynamically override below >>              config_override: | >>                  [mysqld] >>                  bind_address=:: >>           However, it did not work now. 
From log,  it shows error >> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >> line: 1'" >>           I tried many methods, but could not remove the first line >> in 20-override.cnf >>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >> 20-override.cnf >>                  |- >>                  [mysqld] >>                  bind_address=:: >>          I can only add it in manifest.yaml as a static override like >> below. >>                 values: >>                    conf: >>                        database: >>                            config_override: | >>                                [mysqld] >>                                bind_address=:: >>                   b) Reset of controllers and check status of >> OpenStack while a controller is rebooting. >>           I have tested it and pass on simplex. >>           For duplex, I have a setup issue in my side. >>           @Jascanu, Nicolae  Could you help me on it for duplex test, >> if you have time today. Thanks! >> >> Zhipeng >> >> >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月26日 21:13 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> Can you publish the list of tests that have been run for openstack? >> >> Also has openstack been tested for the following scenarios: >> 1) Using IPv6 >> 2) Reset of controllers and check status of openstack while a >> controller is rebooting? >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Monday, May 25, 2020 3:14 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi all, >> >> We have passed all sanity test on all setup. Thanks Nicolae!! >> We also built out OpenStack service images from layered build >> environment. >> >> Please help to review and push below patches to be merged, thanks! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >> >> BRs >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月14日 16:49 >> To: 'Saul Wold' ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi all, >> >> Call for patch review again! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月9日 8:38 >> To: Saul Wold ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Agree! >> >> -----Original Message----- >> From: Saul Wold >> Sent: 2020年5月9日 0:29 >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> I would strengthen that to no changes until we get Green Sanity other >> than what's required to make them Green. >> >> Full Stop! >> >> Sau! >> >> >> On 5/8/20 9:05 AM, Miller, Frank wrote: >>> Until we can get sanity passing for several days in a row I strongly >>> suggest we do not allow any further changes into the load related to >>> OpenStack.  Folks can continue with reviews but let’s hold off >>> allowing merges related to a new OpenStack version. 
>>> >>> Frank >>> >>> *From:*Liu, ZhipengS >>> *Sent:* Friday, May 08, 2020 11:59 AM >>> *To:* starlingx-discuss >>> *Cc:* YU CHENGDE ; Penney, Don >>> >>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi all, >>> >>> Please help to review OpenStack Ussuri upgrade patches. >>> >>> Our target is to get all below patches merged by end of next week. >>> >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>> :merged) >>> >>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>> python3.6 for OpenStack services as ussuri release only support >>> python3. >>> >>> We also rebased openstack-helm/helm-infra to latest version. >>> >>> Engineering build test status. >>> >>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. >>> >>> Thanks! >>> >>> Zhipeng >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Wed Jun 10 13:23:25 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 10 Jun 2020 09:23:25 -0400 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: Message-ID: <1f303d98-f809-e58c-7eb8-17d9ecb3bd69@windriver.com> I guest the question is where to publish the docker images. github, but with a 'ussuri' element added to the tag ? Scott On 2020-06-09 9:04 a.m., Saul Wold wrote: > > Frank, Scott, Davelet: > > Are there cycles available on Cengn (and people resources) to do a > Cengn build with the Ussuri patch set applied?  I know this is > different than a branch build.  
I think we have done this kind of > thing in the past. > > This might help to make sure we don't have any more Cengn build issues > and could give the Test team a sanity spin with a Ussuri/Cengn build. > > Note there is a comment for Scott/Davelet at the bottom of Zhipeng's > email. > > Thanks >   Sau! > > > On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >> Hi all, >> >> So far, all block issues and concerns have been addressed. >> Since we have passed all sanity test, and Ussuri OpenStack has been >> officially released last month, >> there should be no more reason to block these patches merge. >> >> Next step: >> Let's push to get ussuri upgrade/openstack-helm rebasing patches >> merged. We need great help from core guys! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> >> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >> patch with workflow-1 and add depends-on for other patches as we need >> to merge them together.) >> Upgrade openstack-helm-infra zhipeng liu    >> starlingx/openstack-armada-app       workflow-1 >> Add mariadb database config override to support ipv6 zhipeng liu    >> starlingx/openstack-armada-app >> Fix render error in cinder during openstack-helm rebase zhipeng >> liu    starlingx/openstack-armada-app >> Update download list for openstack-helm upgrade zhipeng liu    >> starlingx/openstack-armada-app >> Update manifest.yaml file for openstack-helm upgrade.                >> zhipeng liu starlingx/openstack-armada-app >> Upgrade openstack-helm zhipeng liu    starlingx/openstack-armada-app >> >> # Below 3 patches is for OpenStack upgrade. >> Update manifest.yaml file for ussuri openstack                      >> YU CHENGDE starlingx/openstack-armada-app >> Modify build-tools and stable-wheels for Ussuri upgrading    YU >> CHENGDE    starlingx/root >> Upgrade openstack docker images for stable/ussuri        YU >> CHENGDE    starlingx/upstream >> >> >> After removing required python3 dependent packages from local, we can >> build out base image and OpenStack service images successfully with >> below command. >> =============================================================================== >> >> @Scott, please help to update cengn build script with below 2 >> additional repos and help to trigger image build >> build-stx-base.sh >>    --repo local-stx-build,... \ >>    --repo stx-distro,... \ >>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> Thanks a lot! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月8日 16:54 >> To: 'Miller, Frank' ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> It is not easy to figure out whether/how/when OpenStack-helm-info >> upstream introduce this issue and then fix it. >> I also could not find any fix in LP[1], which just mentioned that >> this intermittent issue not hit us after some changes in related field. >> >> Anyhow, below 2 patches should fix potential bug and I could not see >> the same error log again in our ussuri upgrade EB. >> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >> state management thread death >> >> Since we have passed fully test, we'd better push to merge ussuri >> upgrade/openstack-helm rebasing patches soon. 
>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >> >> Thanks! >> Zhipeng >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月5日 22:32 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> This looks promising.  Your theory is that the 2 openstack-helm-infra >> patches will fix the mariadb recovery issues.  These 2 patches were >> merged in the openstack-helm-infra project in January and February of >> 2020.   What would be good to know is what broke mariadb recovery >> between April of 2019 when Chris Friesen finished up his story [1] >> and our current loads today.  The most likely explanation is the >> upversion of Train or the upversion to openstack-helm-infra done in >> November 2019 introduced the mariadb recovery issues.  And then the >> openstack-helm folks found and fixed the issue earlier in 2020. >> >> If we had more time the preferred approach would be to merge just the >> openstack-helm-infra changes first to prove they address mariadb >> recovery and then in a separate commit merge Ussuri.  But since you >> have validated that mariadb recovers with your Ussuri branch and this >> branch has these openstack-helm commits, I support letting Ussuri >> merge into stx.4.0. >> >> Frank >> [1] https://storyboard.openstack.org/#!/story/2004712 >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Friday, June 05, 2020 2:36 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> As for OpenStack not recovering after both controllers are reset [1] >> I could not reproduce this issue with my Ussuri upgrade EB. >> My test step is: >> 1) ssh to standby controller and sudo reboot -f for it. >> 2) sudo reboot -f for activated controller All pods can resume after >> a while. >> >> However, I could reproduce this issue with DB 20200516T080009Z. >>  From error logs,  it is an old issue analyzed by Chris Friesen in >> [2] early last year. >> >> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >> It includes below 2 patches which fixed this stability issue. >> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >> state management thread death >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 22:35 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> This is not a new requirement.  Users expect the software to recover >> when resets occur. >> >> As I had mentioned at the PTG yesterday I know personally that this >> test passed in stx3.0 before the upversion to train. Someone else who >> performs testing can look to determine when this test was done as >> part of feature testing after train was delivered as it should have >> been tested as part of stx.3.0 as well.  I do not know when this >> started to break.  
One topic we will discuss at the PTG tomorrow will >> be how to improve our test coverage and automation so this type of >> issue can be found immediately as new code is being delivered. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Wednesday, June 03, 2020 10:28 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Frank, >> >> Have we pass this case before?  Is it a new requirement? >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 22:12 >> To: Miller, Frank ; Liu, ZhipengS >> ; starlingx-discuss at lists.starlingx.io; >> Church, Robert >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Yong/Zhipeng - the LP for openstack not recovering after both >> controllers are reset is >> https://bugs.launchpad.net/starlingx/+bug/1881899 >> >> Ovidiu is investigating and will provide any updates from his >> investigation.  Please continue to keep us informed of your >> investigation. >> >> Frank >> >> -----Original Message----- >> From: Miller, Frank >> Sent: Tuesday, June 02, 2020 10:38 PM >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> We used a build from May 28. >> >> As for the decoupling issue these are actively being worked.  If you >> run the system helm-override-show command when the stx-openstack app >> is applied you won’t see the CLI command fail.  It only fails when >> you try a helm-override-show when the app is in uploaded state.  In >> any case this will be fixed shortly. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Tuesday, June 02, 2020 10:04 PM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Thanks for your quick update! >> Which build are you using to test this case? >> Since decoupling commits introduced several regressions (at least >> 2),  not propose to do this kind of stability test with latest build. >> BTW, do we have plan to revert them considering this stability risk?  >> Our Ussuri upgrade patches is waiting for it☹ >> >> Furthermore, we have not seen this test case that force reboot both >> controllers at the same time. Is it a new requirement?  If not , have >> we pass this case before, which build? >> I'd like to help on it with the pass build for comparative analysis. >> From my point , mariadb might not work if we reboot both controllers. >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 8:55 >> To: Miller, Frank ; Liu, ZhipengS >> ; starlingx-discuss at lists.starlingx.io; >> Church, Robert >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> An update on our testing and analysis today.  We are able to >> reproduce the issue with OpenStack not recovering when we trigger a >> reboot of both AIO controllers at the same time.  This results in >> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >> openstack commands not working indefinitely after the controllers >> recover.  We'll create a launchpad tomorrow to track this issue. 
>> >> Frank >> >> -----Original Message----- >> From: Miller, Frank >> Sent: Tuesday, June 02, 2020 12:25 PM >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Thanks Zhipeng for the analysis.  What is challenging here is the >> multitude of issues. >> >> In our debug of openstack the past few days we are seeing the app >> fail completely.  After investigation this issue is a Day 1 >> containerd issue.  This is tracked in LP: >> https://bugs.launchpad.net/starlingx/+bug/1881353 >> >> The issue you are seeing on a swact is a new and very recent issue >> tied to the decoupling commits that were merged late last week.  Bob >> is investigating and I expect he'll have a fix soon for that. >> >> But the issues we are most concerned with are when we see mariadb >> crashing and not able to recover or with openstack services not >> working for longer periods of time.  We're attempting to isolate the >> sequence of events that trigger this. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Tuesday, June 02, 2020 11:47 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>              Unable to unlock controller after swact and lock w/ >> openstack applied I also tested with daily build 20200516T080009Z. >> However, it could not be reproduced. >> We should  fix this regression ASAP! >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月2日 16:48 >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank and all, >> >> Update for issue 2. >> I raised a new LP to track it. >> https://bugs.launchpad.net/starlingx/+bug/1881722 >> Below is the time statistics. It seems reasonable. No obvious issue >> found. >> 1) 3~4min for host restart and get ready. >> 2) 2~3min for mariadb terminating, initialization, get ready. (then >> configmap sync is ready) >> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a >> little, as it can retry quickly to connect ovs-vsctl: >> unix:/var/run/openvswitch/db.sock) >> 4) 1min for other pods ready, like neutron-ovs-agent which depends on >> ovs-db. ) Any comment? >> >> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>              Unable to unlock controller after swact and lock w/ >> openstack applied >>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>              system helm-override-show stx-openstack mariadb >> openstack crash  It seems related to openstack plugin decouple >> related patches. Should be a regression. >>   Please see our update in this 2 LPs for detail info.  @Bob, could >> you pls help further check it and your patches, thanks! >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月1日 16:20 >> To: 'Miller, Frank' ; >> 'starlingx-discuss at lists.starlingx.io' >> ; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> I also tested the issue 2 with latest daily build on duplex setup. >> The conclusion is that the issue is there all the time. 
>> This issue might not be fixed soon, but should not block OpenStack >> upgrade, right? >> >> For 9 OpenStack patches below, I have removed all workflow-1, except >> the first patch and add depends-on all them. >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> Your review and comments are welcome! >> >> As for issue 2, some detail info FYI. >> It also needs to wait for around 10 min before all pods are ready >> again after reboot for master build. >> It stuck on below 2 pods for 10 min. The same as the one I saw with >> my OpenStack upgrade engineering build. >>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >> openvswitch-db) >>       openvswitch-db-8fxkw >> Related key logs below. >>    Warning  FailedMount  2m19s              kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >> failed to sync secret cache: timed out waiting for the condition >>    Warning  FailedMount  2m19s              kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >> sync configmap cache: timed out waiting for the condition >>    Warning  FailedMount  105s               kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >> failed to sync secret cache: timed out waiting for the condition >>    Warning  FailedMount  105s               kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >> sync configmap cache: timed out waiting for the condition >>    Warning  Unhealthy    30s                kubelet, controller-1  >> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >> database connection failed (Permission denied) >>    Warning  Unhealthy    7s                 kubelet, controller-1  >> Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >> database connection failed (Permission denied) >> >> Is it the same stability issue as the one reported from your test >> team?  I can only see this issue after force rebooting. What is our >> expected recovery time? >> Your comment is appreciated! >> >> Thanks! >> Zhipeng >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月29日 9:42 >> To: 'Miller, Frank' ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Glad to see your quick reply!! >> For OpenStack upgrade task, we have finished all test and get patches >> ready for more than 2 weeks, but no any review comments and feedback >> from your side.  What's the next step? >> >> For issue # 2,  in community meeting notes,  I saw that you had some >> stability issue from WR local test team. But so far, I do not see any >> LP for the detail info. You should ask them to do that!  Right? >> >> According to your concern, I tried to reproduce it with my build >> (cherry pick OpenStack upgrade patches)yesterday, and the original >> issue [1] was not seen any more, mariadb got ready quickly, no >> regression. >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月29日 1:07 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Thanks Zhipeng. >> >> Good to see progress on IPv6. 
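As a side note, the stuck pods and kubelet events quoted earlier in this message (openvswitch-db and neutron-ovs-agent held up by FailedMount warnings and probe failures) can be re-checked with standard kubectl commands from the active controller. A minimal sketch; the pod names are the ones from the excerpt above and will differ on every deployment:

    # Pods still outside Running/Completed after the reboot (rough view):
    kubectl -n openstack get pods --no-headers | grep -vE 'Running|Completed'
    # Recent events for one of the stuck pods:
    kubectl -n openstack describe pod openvswitch-db-8fxkw
    # Or scan the namespace events for the mount/probe failures quoted above:
    kubectl -n openstack get events --sort-by=.lastTimestamp | grep -Ei 'FailedMount|Unhealthy'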
>> Waiting for 10 minutes for pods to recover isn't a good result. Is >> there a LP open on this issue?  Which pods are not ready? What can >> you tell us about this 10 minute outage? >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Thursday, May 28, 2020 5:06 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Nicolae already added test case description. Thanks Nicolae! >> >> I also did below test on AIO-DX virtual setup, exactly according to >> your mentioned steps. >> No issue found, but just need to wait for around 10 min before all >> pods are ready again after reboot. >> >> For ipv6 issue, I have submitted new patch for it since dynamic >> override for database config did not work. >>   https://review.opendev.org/#/c/731461/ >>   https://review.opendev.org/#/c/731470/ >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月27日 22:43 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> Thanks for the info.  You have provided the # of testcases but not >> what those testcase do.  Where can I find a description of what the >> OpenStack testcases do? >> >> For the controller reset testcases I'd like to see the test result >> for the following: >> Is openstack usable during the following scenarios on AIO-DX and on >> Standard configurations: >> - Lock/unlock of standby controller >> - reset (ie: reboot -f) of the standby controller >> - reset (ie: reboot -f) of the active controller >> - reapply of stx-openstack after the above scenarios >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Wednesday, May 27, 2020 9:15 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> We have done below tests. >> 1) Sanity tests by Nicolae. 
>> AIO - Simplex >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             49 TCs [PASS] >> Sanity Platform                 07 TCs [PASS] >> >> TOTAL: [ 61 TCs ] >> >> AIO - Duplex >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 07 TCs [PASS] >> >> TOTAL: [ 64 TCs ] >> >> Standard - Local Storage (2+2) >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 08 TCs [PASS] >> >> TOTAL: [ 65 TCs ] >> >> Standard External - Dedicated Storage (2+2+2) >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 09 TCs [PASS] >> >> TOTAL: [ 66 TCs ] >> >> 2) NFV scenario test by me >>      on duplex/multi standard virtual setup >>            duplex bare metal setup >> ===== Setup >> ================================================================================================================================= >> 2020-05-14 02:30:05.524  Create flavor small >> ........................................ [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >> .............................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_swap >> ................................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >> ......................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium >> ....................................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >> ............................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_swap >> .................................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >> ........................ [OKAY] >> 2020-05-14 02:30:05.653  Create image cirros >> ........................................ [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros >> ....................................... [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >> ............................. [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-swap >> .................................. [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >> ........................ [OKAY] >> 2020-05-14 02:30:05.695  Create volume empty_volume >> ................................. [OKAY] >> 2020-05-14 02:30:05.786  Create network internal >> .................................... [OKAY] >> 2020-05-14 02:30:06.158  Create network external >> .................................... [OKAY] >> 2020-05-14 02:30:06.772  Create subnet internal >> ..................................... [OKAY] >> 2020-05-14 02:30:07.661  Create subnet external >> ..................................... [OKAY] >> 2020-05-14 02:30:08.553  Create instance cirros-1 >> ................................... [OKAY] >> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >> ......................... [OKAY] >> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >> .............................. [OKAY] >> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1  >> .................... 
[OKAY] >> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >> ............................. [OKAY] >> 2020-05-14 02:31:21.241  Create instance cirros-image-with-volumes-1  >> ................ [OKAY] >> ============================================================================================================================================= >> ===== Test Iteration 0 (single-execution) >> =================================================================================================== >> 2020-05-14 02:33:04.172  Test Instance-Pause >> ........................................ [OKAY]  (2020-05-14 >> 02:33:18.078 Δ=0:00:12.870) >> 2020-05-14 02:33:35.073  Test Instance-Unpause >> ...................................... [OKAY]  (2020-05-14 >> 02:33:41.608 Δ=0:00:05.866) >> 2020-05-14 02:33:53.049  Test Instance-Suspend >> ...................................... [OKAY]  (2020-05-14 >> 02:33:59.546 Δ=0:00:05.792) >> 2020-05-14 02:34:11.103  Test Instance-Resume >> ....................................... [OKAY]  (2020-05-14 >> 02:34:17.756 Δ=0:00:05.937) >> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >> ................................ [OKAY]  (2020-05-14 02:36:45.923 >> Δ=0:02:15.748) >> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >> ................................ [OKAY]  (2020-05-14 02:37:14.504 >> Δ=0:00:11.704) >> 2020-05-14 02:37:30.673  Test Instance-Stop >> ......................................... [OKAY]  (2020-05-14 >> 02:38:44.543 Δ=0:01:13.220) >> 2020-05-14 02:39:00.481  Test Instance-Start >> ........................................ [OKAY]  (2020-05-14 >> 02:39:07.198 Δ=0:00:06.068) >> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >> ................................. [OKAY]  (2020-05-14 02:39:41.692 >> Δ=0:00:22.306) >> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >> ................................. [OKAY]  (2020-05-14 02:41:22.720 >> Δ=0:01:24.179) >> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >> ......................... [OKAY]  (2020-05-14 02:41:45.441 >> Δ=0:00:05.884) >> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >> .......................... [OKAY]  (2020-05-14 02:43:36.381 >> Δ=0:00:21.637) >> 2020-05-14 02:43:52.320  Test Instance-Resize >> ....................................... [OKAY]  (2020-05-14 >> 02:45:16.409 Δ=0:01:22.812) >> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >> ............................... [OKAY]  (2020-05-14 02:45:39.119 >> Δ=0:00:05.777) >> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >> ................................ [OKAY]  (2020-05-14 02:47:30.175 >> Δ=0:00:21.748) >> 2020-05-14 02:47:46.230  Test Instance-Rebuild >> ...................................... [OKAY]  (2020-05-14 >> 02:48:59.762 Δ=0:01:12.980) >> Total-Tests: 16     Execution-Time: 0:16:11.676 >> >> 3) Another 2 test >>      a) Using IPv6 >>           It can pass with workaround now.  I need one more fix for it. >>           In my previous patch https://review.opendev.org/#/c/716524 >> (merged), I dynamically override below >>              config_override: | >>                  [mysqld] >>                  bind_address=:: >>           However, it did not work now. 
From log,  it shows error >> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >> line: 1'" >>           I tried many methods, but could not remove the first line >> in 20-override.cnf >>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >> 20-override.cnf >>                  |- >>                  [mysqld] >>                  bind_address=:: >>          I can only add it in manifest.yaml as a static override like >> below. >>                 values: >>                    conf: >>                        database: >>                            config_override: | >>                                [mysqld] >>                                bind_address=:: >>                   b) Reset of controllers and check status of >> OpenStack while a controller is rebooting. >>           I have tested it and pass on simplex. >>           For duplex, I have a setup issue in my side. >>           @Jascanu, Nicolae  Could you help me on it for duplex test, >> if you have time today. Thanks! >> >> Zhipeng >> >> >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月26日 21:13 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> Can you publish the list of tests that have been run for openstack? >> >> Also has openstack been tested for the following scenarios: >> 1) Using IPv6 >> 2) Reset of controllers and check status of openstack while a >> controller is rebooting? >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Monday, May 25, 2020 3:14 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi all, >> >> We have passed all sanity test on all setup. Thanks Nicolae!! >> We also built out OpenStack service images from layered build >> environment. >> >> Please help to review and push below patches to be merged, thanks! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >> >> BRs >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月14日 16:49 >> To: 'Saul Wold' ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi all, >> >> Call for patch review again! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月9日 8:38 >> To: Saul Wold ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Agree! >> >> -----Original Message----- >> From: Saul Wold >> Sent: 2020年5月9日 0:29 >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> I would strengthen that to no changes until we get Green Sanity other >> than what's required to make them Green. >> >> Full Stop! >> >> Sau! >> >> >> On 5/8/20 9:05 AM, Miller, Frank wrote: >>> Until we can get sanity passing for several days in a row I strongly >>> suggest we do not allow any further changes into the load related to >>> OpenStack.  Folks can continue with reviews but let’s hold off >>> allowing merges related to a new OpenStack version. 
>>> >>> Frank >>> >>> *From:*Liu, ZhipengS >>> *Sent:* Friday, May 08, 2020 11:59 AM >>> *To:* starlingx-discuss >>> *Cc:* YU CHENGDE ; Penney, Don >>> >>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi all, >>> >>> Please help to review OpenStack Ussuri upgrade patches. >>> >>> Our target is to get all below patches merged by end of next week. >>> >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>> :merged) >>> >>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>> python3.6 for OpenStack services as ussuri release only support >>> python3. >>> >>> We also rebased openstack-helm/helm-infra to latest version. >>> >>> Engineering build test status. >>> >>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. >>> >>> Thanks! >>> >>> Zhipeng >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Jun 10 13:37:46 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 10 Jun 2020 06:37:46 -0700 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: <1f303d98-f809-e58c-7eb8-17d9ecb3bd69@windriver.com> References: <1f303d98-f809-e58c-7eb8-17d9ecb3bd69@windriver.com> Message-ID: <3704c353-2bf6-13a6-7311-2435cba8aaeb@linux.intel.com> To save an email the answer is yes to the previous email about the ask, would like a build with those nine patches applied. On 6/10/20 6:23 AM, Scott Little wrote: > I guest the question is where to publish the docker images. github, but > with a 'ussuri' element added to the tag ? 
> That would be great, if they can then be pulled properly, I guess, that might require an additional change somehow. I am not a helm expert on that. Sau! > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build issues >> and could give the Test team a sanity spin with a Ussuri/Cengn build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, >>> there should be no more reason to block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we need >>> to merge them together.) >>> Upgrade openstack-helm-infra zhipeng liu >>> starlingx/openstack-armada-app       workflow-1 >>> Add mariadb database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app >>> Fix render error in cinder during openstack-helm rebase zhipeng >>> liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app >>> Update manifest.yaml file for openstack-helm upgrade. zhipeng liu >>> starlingx/openstack-armada-app >>> Upgrade openstack-helm zhipeng liu    starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app >>> Modify build-tools and stable-wheels for Ussuri upgrading    YU >>> CHENGDE    starlingx/root >>> Upgrade openstack docker images for stable/ussuri        YU >>> CHENGDE    starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we can >>> build out base image and OpenStack service images successfully with >>> below command. >>> =============================================================================== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build >>> build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. 
>>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >>> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >>> state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 openstack-helm-infra >>> patches will fix the mariadb recovery issues.  These 2 patches were >>> merged in the openstack-helm-infra project in January and February of >>> 2020.   What would be good to know is what broke mariadb recovery >>> between April of 2019 when Chris Friesen finished up his story [1] >>> and our current loads today.  The most likely explanation is the >>> upversion of Train or the upversion to openstack-helm-infra done in >>> November 2019 introduced the mariadb recovery issues.  And then the >>> openstack-helm folks found and fixed the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just the >>> openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and this >>> branch has these openstack-helm commits, I support letting Ussuri >>> merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >>> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >>> state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! 
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else who >>> performs testing can look to determine when this test was done as >>> part of feature testing after train was delivered as it should have >>> been tested as part of stx.3.0 as well.  I do not know when this >>> started to break.  One topic we will discuss at the PTG tomorrow will >>> be how to improve our test coverage and automation so this type of >>> issue can be found immediately as new code is being delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked.  If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement?  If not , have >>> we pass this case before, which build? 
>>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time.  This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a >>> little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends on >>> ovs-db. ) Any comment? 
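To put numbers on the phases listed above (host restart, mariadb, ovs-db, then the dependent pods), the Ready-condition transition times of the individual pods can be read back once the rebooted node is reachable again. A rough sketch only, assuming kubectl access; mariadb-server-0 and the openvswitch/neutron pod name prefixes are taken from earlier in this thread, and the generated suffixes will differ per deployment:

    # When did the mariadb server pod last become Ready (phase 2 above)?
    kubectl -n openstack get pod mariadb-server-0 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}{"\n"}'
    # Current state of the ovs-db and neutron-ovs-agent pods (phases 3 and 4):
    kubectl -n openstack get pods -o wide | grep -E 'openvswitch-db|neutron-ovs-agent'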
>>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! 
>>> For OpenStack upgrade task, we have finished all test and get patches >>> ready for more than 2 weeks, but no any review comments and feedback >>> from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see any >>> LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. 
>>> AIO - Simplex >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             49 TCs [PASS] >>> Sanity Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ================================================================================================================================= >>> >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. 
[OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance cirros-image-with-volumes-1 >>> ................ [OKAY] >>> ============================================================================================================================================= >>> >>> ===== Test Iteration 0 (single-execution) >>> =================================================================================================== >>> >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. 
From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override like >>> below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex test, >>> if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity other >>> than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I strongly >>>> suggest we do not allow any further changes into the load related to >>>> OpenStack.  
Folks can continue with reviews but let’s hold off >>>> allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ian.Jolliffe at windriver.com Wed Jun 10 14:26:11 2020 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Wed, 10 Jun 2020 14:26:11 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG Message-ID: <98161FF9-DB73-4E3F-A4C8-D15D35A1E0A6@windriver.com> Hello Folks, I was going through the PTG discussions and 
etherpads and came across the topic of community and users. Although I’m a new-ish member of the community, I’d like to highlight some things we can also look at: Discussion Forums (Discourse, GitHub Discussions): We are using mailing lists for all discussions today. Most cloud-native projects are using Discourse forums (e.g. Kubernetes, Docker, LXC, LXD, LXCFS, etc. – virtually everyone in this space is part of a Discourse community. I want to double-stress this point actually). IJ >> I agree that Discourse is something we should look at – if you join the community call I am sure you would get some feedback. But perhaps it doesn’t work for your timezone. GitHub recently announced the Beta of Discussions. If STX is looking to build a community there, Discussions might be a nice, low-cost place to host the community. Besides this, many communities have Slack and Discord teams. But forums are infinitely more discoverable (if we’re not talking about ad-hoc discussions). Participation in other communities + Adoption Stories: We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. Fin’: In my personal experience: I’ve been using and talking about STX for the last 6 months. It is strange that for talking about STX internally, we’re using tools like MS Teams and Slack or Yammer/Discourse/PlanetBlue within our respective companies but the community has a 2nd class experience. In my opinion mailing lists and IRC are not the most modern way of managing large communities for modern, cloud-native projects. I’m sorry if this was already discussed some time ago and this is a repetition (Discourse has cool features to resolve these sorts of discussions btw. 😉) Best Taimoor Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jun 10 14:56:21 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 10 Jun 2020 14:56:21 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 10, 2020) In-Reply-To: References: Message-ID: >From today's call... * Standing Topics * Sanity * Green yesterday * issues earlier due to some Ussuri changes related to Python 3 and building container images - Yong & co. working with Scott on options to test this scenario * Gerrit Reviews in Need of Attention * https://review.opendev.org/#/q/topic:for_ussuri+(status:open) - reviews for OpenStack Ussuri Upgrade * https://review.opendev.org/#/c/731652 - fix for a HIGH LP. 
* https://review.opendev.org/#/c/728322/ - logmgmt upgrade to python3 * Topics for this Week * follow ups from vPTG * stx.4.0 MS-3 status * Scott's update on "a new way to test you package's dependencies" * http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/008828.html * Frank will update the guidance on things to check before committing to cite this tool * ARs from Previous Meetings * 5/27 * Build Team mimic what happens in the CENGN build locally * 6/10: in progress * Build Team look into issue with co-dependent commits * 6/10: seems like this is a real issue - workaround is to manually make sure that all co-dependent commits go in together * 5/20 * Saul/Scott review 0514 build break, update learnings/recommendations as appropriate * Scott work on how to make sure there's an ISO whether or not there's a change in the flock layer * 6/10: on to do list * Saul/Ian discuss presenting about StarlingX on one of the TIP open networking group meetings * 6/10: Brent did this! the TIP guys will kick the tires, haven't heard back from them yet * 4/15 * manually updating version info (Build team + Bart) * build team has a plan, see Apr 16 minutes at https://etherpad.opendev.org/p/stx-build * 6/10: this is in progress * follow up with OpenDev re: VM for running SX sanity pending QCOW2 image (Bill, Build) * added an item about QCOW2 image to the build team agenda * 6/10: this in progress * Open Requests for Help * Subcloud on a Virtual Machine (Alfredo Deluca): http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/008827.html * Bart will respond to Alfredo * ERROR when deploy stx-monitor (Rahmat Agung): http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/008824.html * Matt will respond to Rahmat From: Zvonar, Bill Sent: Tuesday, June 9, 2020 1:55 PM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (June 10, 2020) Hi all, reminder of tomorrow's TSC/Community call. Please feel free to add items to the agenda [0] for the Community call beforehand. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200610T1400 From Barton.Wensley at windriver.com Wed Jun 10 15:09:48 2020 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 10 Jun 2020 15:09:48 +0000 Subject: [Starlingx-discuss] Subcloud on a Virtual Machine In-Reply-To: References: Message-ID: Alfredo, We support installing StarlingX in VMs using either KVM or VirtualBox – see the instructions at https://docs.starlingx.io/deploy_install_guides/index.html. We don’t have instructions for installing StarlingX in OpenStack VMs. To do this you would likely want to generate a qcow2 image (using KVM or VirtualBox). I can’t help you with this and based on the lack of response on the list I don’t think others have done this either. If you figure this out it would be great if you could share your findings with the community. Bart From: Alfredo De Luca Sent: June 8, 2020 6:00 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Subcloud on a Virtual Machine Hi all. Any thoughts on this? Also has anyone ever tried this solution with StarlingX on Virtual Machine at all? Cheers On Wed, Jun 3, 2020 at 9:05 PM Alfredo De Luca > wrote: Hi all. 
For testing purposes we are trying to install a subcloud on a VM (Openstack to be precise) but we get a couple of errors as below. Booting from an ISO (STX 3.0) we get this: 1. ERROR: Specified installation (sda) or boot (sda) device is invalid. I assumed the ISO was looking for a device sda, so we fixed that, but then another issue occurred and the error now is: 2. Disk "" given in clearpart command does not exist. Now I wonder if it is possible to install that on top of a VM, and also what the fix for the second error could be. Any idea/clue? Cheers -- /Alfredo -- /Alfredo -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jun 10 15:24:24 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 10 Jun 2020 08:24:24 -0700 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> Message-ID: On 6/10/20 12:32 AM, Imtiaz, Taimoor wrote: > Hello Folks, > > I was going through the PTG discussions and etherpads and came across > the topic of community and users. Although I’m a new-ish member of the > community, I’d like to highlight some things we can also look at: > > *Discussion Forums (Discourse, GitHub Discussions)*: > > We are using mailing lists for all discussions today. Most cloud-native > projects are using Discourse forums (e.g. Kubernetes > , Docker , > LXC, LXD, LXCFS , etc. – virtually > everyone in this space is part of a Discourse community. I want to > double-stress this point actually). > You're welcome to participate in those forums and report back if there are issues, but I don't think we want to maintain 2 communication channels, as I believe Discourse is both a forum and mailing list combined. I know this has come up in the past; StarlingX, as part of the OpenStack Foundation, chose to use IRC. As you point out below, IRC has been around for a long time and it's used by many, many Open Source projects beyond just OpenStack. Please come and participate in the community call [0] on Wednesday mornings. Thanks for your input. Sau! [0] https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call > GitHub recently announced the Beta of Discussions > . > If STX is looking to build a community there, Discussions might be a > nice, low-cost place to host the community. > > Besides this, many communities have Slack and Discord teams. But forums > are infinitely more discoverable (if we’re not talking about ad-hoc > discussions). > > *Participation in other communities + Adoption Stories:* > > We need to be heavily present (announce CVEs, project updates etc.) in > the Kubernetes discussion forums and Slack. > > CNCF has regular posts from adopters of Kubernetes. If we have users who > have adopted STX for their edge, we should invite their architect to > promote their company’s blogpost on CNCF’s blog. I think it’s great > promotion for the user’s product and for the STX community. > > *Fin’:* > > In my personal experience: I’ve been using and talking about STX for the > last 6 months. It is strange that for talking about STX internally, > we’re using tools like MS Teams and Slack or Yammer/Discourse/PlanetBlue > within our respective companies but the community has a 2^nd class > experience. > > In my opinion mailing lists and IRC are not the most modern way of > managing large communities for modern, cloud-native projects.
I’m sorry > if this was already discussed some time ago and this is a repetition > (Discourse has cool features to resolve these sorts of discussions btw. 😉) > > Best > > Taimoor > > Intel Deutschland GmbH > Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany > Tel: +49 89 99 8853-0, www.intel.de > Managing Directors: Christin Eisenschmid, Gary Kershaw > Chairperson of the Supervisory Board: Nicole Lau > Registered Office: Munich > Commercial Register: Amtsgericht Muenchen HRB 186928 > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From fungi at yuggoth.org Wed Jun 10 16:01:44 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 16:01:44 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> Message-ID: <20200610160144.sgdw7evnrvfw6jna@yuggoth.org> On 2020-06-10 08:24:24 -0700 (-0700), Saul Wold wrote: [...] > Your welcome to participate in those forums and report back if > there are issues, but I don't think we want to maintain 2 > communication channels, as I believe Discourse is both a forum and > mailing list combined. [...] Having struggled repeatedly to interact with Discourse via E-mail, I can say that it's not really a mailing list. It has some features to feed you posts via E-mail and accept replies, but that is where the similarity to a traditional listserv ends. I've been exploring upgrading our Mailman servers the newer 3.x series which enables a lot of Web forum like workflows (via Hyperkitty), but can also say that it turns mailing lists into Web forums about as well as Discourse turns Web forums into mailing lists (that is to say, probably not sufficiently for folks who are seeking a real "Web forum experience"). Personally, I miss Usenet, and wish I had sufficient time to work on adding an NNTP connector for our lists. But the long as short of it is that communities use different tools to communicate, and as someone who participates in lots of diverse communities I've had to learn to do so with a wide variety of tools. Choice of communication tooling is not what makes or breaks a community, and spending too much time jumping back and forth between popular communication platforms of the day serves mostly to eat effort which could otherwise be spent improving software the community is there to produce and maintain. That the Linux kernel developers continue to use mailing lists for discussion, and even for sharing and reviewing Git commits, has not resulted in the death of their community (quite the contrary). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ildiko.vancsa at gmail.com Wed Jun 10 16:18:48 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 10 Jun 2020 18:18:48 +0200 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> Message-ID: <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> Hi Taimoor, I agree with the previous responses regarding the communication tool comments and would reflect on the blog and information sharing topic here. 
[…] > Participation in other communities + Adoption Stories: > We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. > > CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. […] You may not be aware, but on the StarlingX website we have a blog section where we are actively looking for new content: https://www.starlingx.io/blog/ If you or anyone else has an adoption story, demo, or any other cool topic to share details about please share it on the community’s blog. You can add pointers to these blog posts from anywhere including the CNCF sites which helps with further increasing visibility of the project and get new content in front of those who are monitoring the blog for new stories. Anyone can suggest a new post on GitHub in the form of a pull request: https://github.com/StarlingXWeb/starlingx-website/tree/master/src/pages/blog If you need help with putting your blog post together please reach out to me and I’m happy to help reviewing and polishing the text or upload it to GitHub if you have issues with that. Thanks, Ildikó From chris.friesen at windriver.com Wed Jun 10 17:05:30 2020 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 10 Jun 2020 11:05:30 -0600 Subject: [Starlingx-discuss] new docker image referenced by starlingx Message-ID: Hi all, Just a heads-up that with https://review.opendev.org/#/c/731831 merged the initial ansible playbook will try to pull the starlingx/n3000-opae:stx.4.0-v1.0.0 Docker image as listed at https://hub.docker.com/r/starlingx/n3000-opae/tags Anyone using a manually-managed Docker image registry will need to add this image. Thanks, Chris From Matt.Peters at windriver.com Wed Jun 10 17:36:13 2020 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 10 Jun 2020 17:36:13 +0000 Subject: [Starlingx-discuss] ERROR when deploy stx-monitor. In-Reply-To: References: Message-ID: <7172B806-C4DE-4992-AD29-DDC34F76E295@windriver.com> Hi Rahmat, The stx-monitor Armada application is not being actively maintained, since there wasn’t much interest from the community in continuing to support it. The individual container services can still be deployed using Helm on StarlingX if you require them. There are also several other projects within the CNCF landscape for monitoring that can be considered. https://landscape.cncf.io/category=observability-and-analysis&format=card-mode&grouping=category I hope that answers your question. Regards, Matt From: Rahmat Agung Date: Sunday, June 7, 2020 at 10:34 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] ERROR when deploy stx-monitor.
I try to deploy stx-monitor on 3 nworker nodes with label like this: ``` worker-3 Ready 2d18h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-client=enabled,elastic-controller=enabled,elastic-data=enabled,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-3,kubernetes.io/os=linux worker-4 Ready 2d18h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-client=enabled,elastic-controller=enabled,elastic-data=enabled,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-4,kubernetes.io/os=linux worker-5 Ready 2d16h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-5,kubernetes.io/os=linux ``` When I check logs: ``` us: <_Rendezvous of RPC that terminated with: status = StatusCode.UNKNOWN details = "release mon-kibana failed: timed out waiting for the condition" debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" > 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller Traceback (most recent call last): 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 473, in install_release 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller metadata=self.metadata) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, None) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller status = StatusCode.UNKNOWN 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller details = "release mon-kibana failed: timed out waiting for the condition" 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller > 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller 2020-06-07 14:07:21.199 7963 DEBUG armada.handlers.tiller [-] [chart=kibana]: Helm getting release status for release=mon-kibana, version=0 get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:539 2020-06-07 14:07:21.402 7963 DEBUG armada.handlers.tiller [-] [chart=kibana]: GetReleaseStatus= name: "mon-kibana" info { status { code: FAILED } first_deployed { seconds: 1591538240 nanos: 977775758 } last_deployed { seconds: 1591538240 nanos: 977775758 } Description: "Release \"mon-kibana\" failed: timed out waiting for the condition" } namespace: "monitor" get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:547 2020-06-07 
14:07:21.404 7963 ERROR armada.handlers.armada [-] Chart deploy [kibana] failed: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: mon-kibana - Tiller Message: b'Release "mon-kibana" failed: timed out waiting for the condition' 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada Traceback (most recent call last): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 473, in install_release 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada metadata=self.metadata) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada return _end_unary_response_blocking(state, call, False, None) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada raise _Rendezvous(state, None, None, deadline) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada status = StatusCode.UNKNOWN 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada details = "release mon-kibana failed: timed out waiting for the condition" 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada > 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada During handling of the above exception, another exception occurred: 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada Traceback (most recent call last): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 225, in handle_result 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada result = get_result() 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 236, in 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada if (handle_result(chart, lambda: deploy_chart(chart))): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 214, in deploy_chart 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada chart, cg_test_all_charts, prefix, known_releases) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 239, in execute 2020-06-07 14:07[402248.574350] serial8250: too much work for irq4 :21.404 7963 ERROR armada.handlers.armada timeout=timer) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 486, in install_release 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada raise ex.ReleaseException(release, status, 'Install') 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 
armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: mon-kibana - Tiller Message: b'Release "mon-kibana" failed: timed out waiting for the condition' 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.406 7963 ERROR armada.handlers.armada [-] Chart deploy(s) failed: ['kibana'] 2020-06-07 14:07:21.478 7963 INFO armada.handlers.lock [-] Releasing lock 2020-06-07 14:07:21.486 7963 ERROR armada.cli [-] Caught internal exception: armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kibana'] 2020-06-07 14:07:21.486 7963 ERROR armada.cli Traceback (most recent call last): 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke 2020-06-07 14:07:21.486 7963 ERROR armada.cli self.invoke() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 213, in invoke 2020-06-07 14:07:21.486 7963 ERROR armada.cli resp = self.handle(documents, tiller) 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 81, in func_wrapper 2020-06-07 14:07:21.486 7963 ERROR armada.cli return future.result() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result 2020-06-07 14:07:21.486 7963 ERROR armada.cli return self.__get_result() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result 2020-06-07 14:07:21.486 7963 ERROR armada.cli raise self._exception 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run 2020-06-07 14:07:21.486 7963 ERROR armada.cli result = self.fn(*self.args, **self.kwargs) 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 256, in handle 2020-06-07 14:07:21.486 7963 ERROR armada.cli return armada.sync() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 252, in sync 2020-06-07 14:07:21.486 7963 ERROR armada.cli raise armada_exceptions.ChartDeployException(failures) 2020-06-07 14:07:21.486 7963 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kibana'] 2020-06-07 14:07:21.486 7963 ERROR armada.cli ``` What does the error above mean? I just want to know, is stx-monitor stable or still experimental? Because I could not find documentation about it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Wed Jun 10 18:33:54 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 10 Jun 2020 18:33:54 +0000 Subject: [Starlingx-discuss] Weekly Build meeting on Friday at 15:00 UTC Message-ID: For this week only the StarlingX Build meeting is moving to Friday morning: 15:00 UTC 11:00 EDT 08:00 PT Etherpad: https://etherpad.openstack.org/p/stx-build Zoom bridge: https://zoom.us/j/342730236 Frank Build PL -------------- next part -------------- An HTML attachment was scrubbed...
URL: From taimoor.imtiaz at intel.com Wed Jun 10 18:50:05 2020 From: taimoor.imtiaz at intel.com (Imtiaz, Taimoor) Date: Wed, 10 Jun 2020 18:50:05 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> Message-ID: <9b55496eddfb47178a8023ba59e6ccd5@intel.com> Hi Ildiko, Saul, Sure, I do not disagree that mailing lists are functional. Discourse is so much more welcoming and information is easy to discover. If you monitor a community forum such as Kubernetes', you'll see people having fun too (showing off projects etc.). It's also a bit more realtime and meant for threaded discussions. It was just a suggestion on my end. I do not think we should compare cloud native communities with Linux. The stewards are different generations of folks and mindset are totally different. In my observation, most people do not go through the hassle of registering on mailing lists. They do however like browsing forums (I know I do).. SEO tooling also likes it 😊 I agree with the blog post idea and I'll try to get some users to write those. These things came to mind after listening to the 2nd day's PTG recordings where there was a discussion around community adoption. Best, Taimoor -----Original Message----- From: Ildiko Vancsa Sent: Wednesday, June 10, 2020 18:19 To: Imtiaz, Taimoor Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Adoption discussions in PTG Hi Taimoor, I agree with the previous responses regarding the communication tool comments and would reflect on the blog and information sharing topic here. […] > Participation in other communities + Adoption Stories: > We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. > > CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. […] You may not be aware, but on the StarlingX website we have a blog section where we are actively looking for new content: https://www.starlingx.io/blog/ If you or anyone else has an adoption story, demo, or any other cool topic to share details about please share it on the community’s blog. You can add pointers to these blog posts from anywhere including the CNCF sites which helps with further increasing visibility of the project and get new content in front of those who are monitoring the blog for new stories. Anyone can suggest a new post on GitHub in the form of a pull request: https://github.com/StarlingXWeb/starlingx-website/tree/master/src/pages/blog If you need help with putting your blog post together please reach out to me and I’m happy to help reviewing and polishing the text or upload it to GitHub if you have issues with that. 
Thanks, Ildikó Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 From bruce.e.jones at intel.com Wed Jun 10 20:05:22 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 10 Jun 2020 20:05:22 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <9b55496eddfb47178a8023ba59e6ccd5@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> <9b55496eddfb47178a8023ba59e6ccd5@intel.com> Message-ID: Taimoor, thank you for sharing your thoughts on these topics. Earlier in the thread you said that we should be participating actively in CNCF communication forums/etc.. - posting news, questions, etc.. I absolutely agree with that, but don't have much time myself to do so. Perhaps someone in the community could volunteer to (or may already) represent the project in those places? brucej -----Original Message----- From: Imtiaz, Taimoor Sent: Wednesday, June 10, 2020 11:50 AM To: Ildiko Vancsa Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Adoption discussions in PTG Hi Ildiko, Saul, Sure, I do not disagree that mailing lists are functional. Discourse is so much more welcoming and information is easy to discover. If you monitor a community forum such as Kubernetes', you'll see people having fun too (showing off projects etc.). It's also a bit more realtime and meant for threaded discussions. It was just a suggestion on my end. I do not think we should compare cloud native communities with Linux. The stewards are different generations of folks and mindset are totally different. In my observation, most people do not go through the hassle of registering on mailing lists. They do however like browsing forums (I know I do).. SEO tooling also likes it 😊 I agree with the blog post idea and I'll try to get some users to write those. These things came to mind after listening to the 2nd day's PTG recordings where there was a discussion around community adoption. Best, Taimoor -----Original Message----- From: Ildiko Vancsa Sent: Wednesday, June 10, 2020 18:19 To: Imtiaz, Taimoor Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Adoption discussions in PTG Hi Taimoor, I agree with the previous responses regarding the communication tool comments and would reflect on the blog and information sharing topic here. […] > Participation in other communities + Adoption Stories: > We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. > > CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. […] You may not be aware, but on the StarlingX website we have a blog section where we are actively looking for new content: https://www.starlingx.io/blog/ If you or anyone else has an adoption story, demo, or any other cool topic to share details about please share it on the community’s blog. 
You can add pointers to these blog posts from anywhere including the CNCF sites which helps with further increasing visibility of the project and get new content in front of those who are monitoring the blog for new stories. Anyone can suggest a new post on GitHub in the form of a pull request: https://github.com/StarlingXWeb/starlingx-website/tree/master/src/pages/blog If you need help with putting your blog post together please reach out to me and I’m happy to help reviewing and polishing the text or upload it to GitHub if you have issues with that. Thanks, Ildikó Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From fungi at yuggoth.org Wed Jun 10 20:14:41 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 20:14:41 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <9b55496eddfb47178a8023ba59e6ccd5@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> <9b55496eddfb47178a8023ba59e6ccd5@intel.com> Message-ID: <20200610201440.t2c334s43qtwpsqk@yuggoth.org> On 2020-06-10 18:50:05 +0000 (+0000), Imtiaz, Taimoor wrote: > Sure, I do not disagree that mailing lists are functional. > Discourse is so much more welcoming and information is easy to > discover. "Welcoming" and "easy to discover" are matters of personal taste, and so differ widely based on individual experience. De gustibus non est disputandum. > If you monitor a community forum such as Kubernetes', you'll see > people having fun too (showing off projects etc.). It's also a bit > more realtime and meant for threaded discussions. E-mail and thus mailing lists are also explicitly designed for threaded discussions, unless you've decided to cripple your communications by using a terrible mail client. My client shows me thread trees of list messages just fine. > It was just a suggestion on my end. I do not think we should > compare cloud native communities with Linux. The stewards are > different generations of folks and mindset are totally different. I hesitate to ascribe ageist generalizations to communication tooling preferences. Are you suggesting that the Linux kernel doesn't have younger developers? Or that Kubernetes doesn't have older developers? What is specific to the Linux maintainer "mindset" which differentiates it from the Kubernetes maintainer "mindset" in this regard? > In my observation, most people do not go through the hassle of > registering on mailing lists. They do however like browsing forums > (I know I do).. I have no problem subscribing to mailing lists, in fact I'm subscribed to many. I much prefer getting messages in my inbox and not having to go check a dozen different Web sites to read new forum posts for discussions in which I'm involved/interested. To be honest, I'd rather not start up a Web browser at all when I can help it. > SEO tooling also likes it 😊 [...] Have any details on this? Popular Web search engines already crawl and index our list archives, and turn up relevant results from them. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From nicolae.jascanu at intel.com Wed Jun 10 20:21:55 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Wed, 10 Jun 2020 20:21:55 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200610T020226Z Message-ID: Sanity Test from 2020-June-10 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200610T020226Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200610T020226Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Jun 10 20:28:07 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 10 Jun 2020 16:28:07 -0400 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  
I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, >>> there should be no more reason to block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) >>> Upgrade openstack-helm-infra zhipeng liu >>> starlingx/openstack-armada-app       workflow-1 >>> Add mariadb database config override to support ipv6 zhipeng liu    >>> starlingx/openstack-armada-app >>> Fix render error in cinder during openstack-helm rebase zhipeng >>> liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app >>> Update manifest.yaml file for openstack-helm upgrade.                >>> zhipeng liu starlingx/openstack-armada-app >>> Upgrade openstack-helm zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack                      >>> YU CHENGDE starlingx/openstack-armada-app >>> Modify build-tools and stable-wheels for Ussuri upgrading YU >>> CHENGDE    starlingx/root >>> Upgrade openstack docker images for stable/ussuri        YU >>> CHENGDE    starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> =============================================================================== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build >>> build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. 
>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues.  >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. 
>>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability >>> risk?  Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! 
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? 
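(For anyone trying to reproduce the timing breakdown above, a rough way to collect the same data is to watch pod events and probe results while the host recovers; the pod name below is the one quoted later in this thread and is only an example.)

    # Event timeline for the openstack namespace (FailedMount, probe failures, ...)
    kubectl -n openstack get events --sort-by=.lastTimestamp | tail -50
    # Probe configuration and recent events for the slow openvswitch-db pod
    kubectl -n openstack describe pod openvswitch-db-8fxkw
    # Timestamped snapshots of pods that are not yet Ready, every 30s
    while true; do date; kubectl -n openstack get pods | grep -vE 'Running|Completed'; sleep 30; done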
>>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1  >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1  >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1  >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1  >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1  >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1  >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! 
>>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. 
>>> AIO - Simplex >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             49 TCs [PASS] >>> Sanity Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ================================================================================================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. 
[OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1  >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ============================================================================================================================================= >>> ===== Test Iteration 0 (single-execution) >>> =================================================================================================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. 
From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I strongly >>>> suggest we do not allow any further changes into the load related to >>>> OpenStack.  
Folks can continue with reviews but let’s hold off >>>> allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Jun 10 21:51:15 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 10 Jun 2020 14:51:15 -0700 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: <68b629aa-9809-2462-e50e-6b2a5e9c71aa@linux.intel.com> Scott, Can you provide details of what you need and what we need to do on Jenkins for Davelet and me to work on it tomorrow when you're off-line? I am guessing you need a set of merged branches someplace that we can point Jenkins at, or does that need to be on Cengn or someplace else? Thanks Sau! On 6/10/20 1:28 PM, Scott Little wrote: > Six of the nine updates are in a state of merge conflict. > > Please resolve the conflicts so that I can make progress with a CENGN build. > > Scott > > > > On 2020-06-10 9:20 a.m., Scott Little wrote: >> CENGN cycles aren't a problem.  People resources are a challenge. >> >> So the ask is for a manual build, on CENGN, adding in the nine patches >> listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). >> >> .. and the addition of two repos to the build-stx-base.sh step >> >> build-stx-base.sh >>    --repo local-stx-build,... \ >>    --repo stx-distro,... \ >>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> >> Is that correct? >> >> Scott >> >> >> On 2020-06-09 9:04 a.m., Saul Wold wrote: >>> >>> Frank, Scott, Davelet: >>> >>> Are there cycles available on Cengn (and people resources) to do a >>> Cengn build with the Ussuri patch set applied?  I know this is >>> different than a branch build.  I think we have done this kind of >>> thing in the past. >>> >>> This might help to make sure we don't have any more Cengn build >>> issues and could give the Test team a sanity spin with a Ussuri/Cengn >>> build. >>> >>> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >>> email. >>> >>> Thanks >>>   Sau! >>> >>> >>> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>>> Hi all, >>>> >>>> So far, all block issues and concerns have been addressed. >>>> Since we have passed all sanity test, and Ussuri OpenStack has been >>>> officially released last month, >>>> there should be no more reason to block these patches merge. >>>> >>>> Next step: >>>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>>> merged. We need great help from core guys! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> >>>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>>> patch with workflow-1 and add depends-on for other patches as we >>>> need to merge them together.) >>>> Upgrade openstack-helm-infra zhipeng liu >>>> starlingx/openstack-armada-app       workflow-1 >>>> Add mariadb database config override to support ipv6 zhipeng liu >>>> starlingx/openstack-armada-app >>>> Fix render error in cinder during openstack-helm rebase zhipeng >>>> liu    starlingx/openstack-armada-app >>>> Update download list for openstack-helm upgrade zhipeng liu >>>> starlingx/openstack-armada-app >>>> Update manifest.yaml file for openstack-helm upgrade. zhipeng liu >>>> starlingx/openstack-armada-app >>>> Upgrade openstack-helm zhipeng liu starlingx/openstack-armada-app >>>> >>>> # Below 3 patches is for OpenStack upgrade.
>>>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>>> starlingx/openstack-armada-app >>>> Modify build-tools and stable-wheels for Ussuri upgrading YU >>>> CHENGDE    starlingx/root >>>> Upgrade openstack docker images for stable/ussuri        YU >>>> CHENGDE    starlingx/upstream >>>> >>>> >>>> After removing required python3 dependent packages from local, we >>>> can build out base image and OpenStack service images successfully >>>> with below command. >>>> =============================================================================== >>>> >>>> @Scott, please help to update cengn build script with below 2 >>>> additional repos and help to trigger image build >>>> build-stx-base.sh >>>>    --repo local-stx-build,... \ >>>>    --repo stx-distro,... \ >>>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>>>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>>> >>>> Thanks a lot! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月8日 16:54 >>>> To: 'Miller, Frank' ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> It is not easy to figure out whether/how/when OpenStack-helm-info >>>> upstream introduce this issue and then fix it. >>>> I also could not find any fix in LP[1], which just mentioned that >>>> this intermittent issue not hit us after some changes in related field. >>>> >>>> Anyhow, below 2 patches should fix potential bug and I could not see >>>> the same error log again in our ussuri upgrade EB. >>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>>> avoid state management thread death >>>> >>>> Since we have passed fully test, we'd better push to merge ussuri >>>> upgrade/openstack-helm rebasing patches soon. >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>>> >>>> Thanks! >>>> Zhipeng >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月5日 22:32 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> This looks promising.  Your theory is that the 2 >>>> openstack-helm-infra patches will fix the mariadb recovery issues. >>>> These 2 patches were merged in the openstack-helm-infra project in >>>> January and February of 2020.   What would be good to know is what >>>> broke mariadb recovery between April of 2019 when Chris Friesen >>>> finished up his story [1] and our current loads today.  The most >>>> likely explanation is the upversion of Train or the upversion to >>>> openstack-helm-infra done in November 2019 introduced the mariadb >>>> recovery issues.  And then the openstack-helm folks found and fixed >>>> the issue earlier in 2020. >>>> >>>> If we had more time the preferred approach would be to merge just >>>> the openstack-helm-infra changes first to prove they address mariadb >>>> recovery and then in a separate commit merge Ussuri.  But since you >>>> have validated that mariadb recovers with your Ussuri branch and >>>> this branch has these openstack-helm commits, I support letting >>>> Ussuri merge into stx.4.0. 
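(To double-check which of those two mariadb fixes a given openstack-helm-infra snapshot actually carries, the commit subjects from the Gerrit reviews above can simply be searched in the chart history; the clone path and date below are only examples.)

    git clone https://opendev.org/openstack/openstack-helm-infra /tmp/osh-infra
    cd /tmp/osh-infra
    # The two mariadb recovery fixes referenced in this thread
    git log --oneline --grep='Prevent splitbrain during full Galera restart'
    git log --oneline --grep='avoid state management thread death'
    # Or list everything that touched the mariadb chart since the Train-era rebase
    git log --oneline --since=2019-11-01 -- mariadb/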
>>>> >>>> Frank >>>> [1] https://storyboard.openstack.org/#!/story/2004712 >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Friday, June 05, 2020 2:36 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> As for OpenStack not recovering after both controllers are reset [1] >>>> I could not reproduce this issue with my Ussuri upgrade EB. >>>> My test step is: >>>> 1) ssh to standby controller and sudo reboot -f for it. >>>> 2) sudo reboot -f for activated controller All pods can resume after >>>> a while. >>>> >>>> However, I could reproduce this issue with DB 20200516T080009Z. >>>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>>> [2] early last year. >>>> >>>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>>> It includes below 2 patches which fixed this stability issue. >>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>>> avoid state management thread death >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 22:35 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> This is not a new requirement.  Users expect the software to recover >>>> when resets occur. >>>> >>>> As I had mentioned at the PTG yesterday I know personally that this >>>> test passed in stx3.0 before the upversion to train. Someone else >>>> who performs testing can look to determine when this test was done >>>> as part of feature testing after train was delivered as it should >>>> have been tested as part of stx.3.0 as well.  I do not know when >>>> this started to break.  One topic we will discuss at the PTG >>>> tomorrow will be how to improve our test coverage and automation so >>>> this type of issue can be found immediately as new code is being >>>> delivered. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Wednesday, June 03, 2020 10:28 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Frank, >>>> >>>> Have we pass this case before?  Is it a new requirement? >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 22:12 >>>> To: Miller, Frank ; Liu, ZhipengS >>>> ; starlingx-discuss at lists.starlingx.io; >>>> Church, Robert >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Yong/Zhipeng - the LP for openstack not recovering after both >>>> controllers are reset is >>>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>>> >>>> Ovidiu is investigating and will provide any updates from his >>>> investigation.  Please continue to keep us informed of your >>>> investigation. 
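(When digging into that kind of mariadb non-recovery, the usual first data points are the galera/wsrep state of each server pod; a rough sketch, with pod names and label selectors assumed from the usual openstack-helm chart conventions rather than verified against this exact load.)

    # mariadb pods, restart counts and placement
    kubectl -n openstack get pods -l application=mariadb -o wide
    # wsrep/galera state transitions show up in the server logs
    kubectl -n openstack logs mariadb-server-0 --tail=200 | grep -i wsrep
    kubectl -n openstack logs mariadb-server-1 --tail=200 | grep -i wsrep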
>>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: Tuesday, June 02, 2020 10:38 PM >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> We used a build from May 28. >>>> >>>> As for the decoupling issue these are actively being worked. If you >>>> run the system helm-override-show command when the stx-openstack app >>>> is applied you won’t see the CLI command fail.  It only fails when >>>> you try a helm-override-show when the app is in uploaded state.  In >>>> any case this will be fixed shortly. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Tuesday, June 02, 2020 10:04 PM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Thanks for your quick update! >>>> Which build are you using to test this case? >>>> Since decoupling commits introduced several regressions (at least >>>> 2),  not propose to do this kind of stability test with latest build. >>>> BTW, do we have plan to revert them considering this stability >>>> risk?  Our Ussuri upgrade patches is waiting for it☹ >>>> >>>> Furthermore, we have not seen this test case that force reboot both >>>> controllers at the same time. Is it a new requirement? If not , have >>>> we pass this case before, which build? >>>> I'd like to help on it with the pass build for comparative analysis. >>>> From my point , mariadb might not work if we reboot both controllers. >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 8:55 >>>> To: Miller, Frank ; Liu, ZhipengS >>>> ; starlingx-discuss at lists.starlingx.io; >>>> Church, Robert >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> An update on our testing and analysis today.  We are able to >>>> reproduce the issue with OpenStack not recovering when we trigger a >>>> reboot of both AIO controllers at the same time. This results in >>>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>>> openstack commands not working indefinitely after the controllers >>>> recover.  We'll create a launchpad tomorrow to track this issue. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: Tuesday, June 02, 2020 12:25 PM >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Thanks Zhipeng for the analysis.  What is challenging here is the >>>> multitude of issues. >>>> >>>> In our debug of openstack the past few days we are seeing the app >>>> fail completely.  After investigation this issue is a Day 1 >>>> containerd issue.  This is tracked in LP: >>>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>>> >>>> The issue you are seeing on a swact is a new and very recent issue >>>> tied to the decoupling commits that were merged late last week.  Bob >>>> is investigating and I expect he'll have a fix soon for that. 
>>>> >>>> But the issues we are most concerned with are when we see mariadb >>>> crashing and not able to recover or with openstack services not >>>> working for longer periods of time.  We're attempting to isolate the >>>> sequence of events that trigger this. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Tuesday, June 02, 2020 11:47 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>>              Unable to unlock controller after swact and lock w/ >>>> openstack applied I also tested with daily build 20200516T080009Z. >>>> However, it could not be reproduced. >>>> We should  fix this regression ASAP! >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月2日 16:48 >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank and all, >>>> >>>> Update for issue 2. >>>> I raised a new LP to track it. >>>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>>> Below is the time statistics. It seems reasonable. No obvious issue >>>> found. >>>> 1) 3~4min for host restart and get ready. >>>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>>> configmap sync is ready) >>>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>>> a little, as it can retry quickly to connect ovs-vsctl: >>>> unix:/var/run/openvswitch/db.sock) >>>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>>> on ovs-db. ) Any comment? >>>> >>>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>>              Unable to unlock controller after swact and lock w/ >>>> openstack applied >>>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>>              system helm-override-show stx-openstack mariadb >>>> openstack crash  It seems related to openstack plugin decouple >>>> related patches. Should be a regression. >>>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>>> you pls help further check it and your patches, thanks! >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月1日 16:20 >>>> To: 'Miller, Frank' ; >>>> 'starlingx-discuss at lists.starlingx.io' >>>> ; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> I also tested the issue 2 with latest daily build on duplex setup. >>>> The conclusion is that the issue is there all the time. >>>> This issue might not be fixed soon, but should not block OpenStack >>>> upgrade, right? >>>> >>>> For 9 OpenStack patches below, I have removed all workflow-1, except >>>> the first patch and add depends-on all them. >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> Your review and comments are welcome! >>>> >>>> As for issue 2, some detail info FYI. >>>> It also needs to wait for around 10 min before all pods are ready >>>> again after reboot for master build. >>>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>>> my OpenStack upgrade engineering build. 
>>>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>>> openvswitch-db) >>>>       openvswitch-db-8fxkw >>>> Related key logs below. >>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>>> failed to sync secret cache: timed out waiting for the condition >>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>>> sync configmap cache: timed out waiting for the condition >>>>    Warning  FailedMount  105s               kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>>> failed to sync secret cache: timed out waiting for the condition >>>>    Warning  FailedMount  105s               kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>>> sync configmap cache: timed out waiting for the condition >>>>    Warning  Unhealthy    30s                kubelet, controller-1 >>>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>>> database connection failed (Permission denied) >>>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>>> Readiness probe failed: ovs-vsctl: >>>> unix:/var/run/openvswitch/db.sock: database connection failed >>>> (Permission denied) >>>> >>>> Is it the same stability issue as the one reported from your test >>>> team?  I can only see this issue after force rebooting. What is our >>>> expected recovery time? >>>> Your comment is appreciated! >>>> >>>> Thanks! >>>> Zhipeng >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月29日 9:42 >>>> To: 'Miller, Frank' ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Glad to see your quick reply!! >>>> For OpenStack upgrade task, we have finished all test and get >>>> patches ready for more than 2 weeks, but no any review comments and >>>> feedback from your side.  What's the next step? >>>> >>>> For issue # 2,  in community meeting notes,  I saw that you had some >>>> stability issue from WR local test team. But so far, I do not see >>>> any LP for the detail info. You should ask them to do that!  Right? >>>> >>>> According to your concern, I tried to reproduce it with my build >>>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>>> issue [1] was not seen any more, mariadb got ready quickly, no >>>> regression. >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月29日 1:07 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Thanks Zhipeng. >>>> >>>> Good to see progress on IPv6. >>>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>>> there a LP open on this issue?  Which pods are not ready? What can >>>> you tell us about this 10 minute outage? 
>>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Thursday, May 28, 2020 5:06 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Nicolae already added test case description. Thanks Nicolae! >>>> >>>> I also did below test on AIO-DX virtual setup, exactly according to >>>> your mentioned steps. >>>> No issue found, but just need to wait for around 10 min before all >>>> pods are ready again after reboot. >>>> >>>> For ipv6 issue, I have submitted new patch for it since dynamic >>>> override for database config did not work. >>>>   https://review.opendev.org/#/c/731461/ >>>>   https://review.opendev.org/#/c/731470/ >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月27日 22:43 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> Thanks for the info.  You have provided the # of testcases but not >>>> what those testcase do.  Where can I find a description of what the >>>> OpenStack testcases do? >>>> >>>> For the controller reset testcases I'd like to see the test result >>>> for the following: >>>> Is openstack usable during the following scenarios on AIO-DX and on >>>> Standard configurations: >>>> - Lock/unlock of standby controller >>>> - reset (ie: reboot -f) of the standby controller >>>> - reset (ie: reboot -f) of the active controller >>>> - reapply of stx-openstack after the above scenarios >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Wednesday, May 27, 2020 9:15 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> We have done below tests. >>>> 1) Sanity tests by Nicolae. 
>>>> AIO - Simplex >>>> Setup                                    04 TCs [PASS] >>>> Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             49 TCs [PASS] >>>> Sanity Platform                 07 TCs [PASS] >>>> >>>> TOTAL: [ 61 TCs ] >>>> >>>> AIO - Duplex >>>> Setup                                    04 TCs [PASS] >>>> Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             52 TCs [PASS] >>>> Sanity Platform                 07 TCs [PASS] >>>> >>>> TOTAL: [ 64 TCs ] >>>> >>>> Standard - Local Storage (2+2) >>>> Setup                                    04 TCs [PASS] >>>> Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             52 TCs [PASS] >>>> Sanity Platform                 08 TCs [PASS] >>>> >>>> TOTAL: [ 65 TCs ] >>>> >>>> Standard External - Dedicated Storage (2+2+2) >>>> Setup                                    04 TCs [PASS] >>>> Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             52 TCs [PASS] >>>> Sanity Platform                 09 TCs [PASS] >>>> >>>> TOTAL: [ 66 TCs ] >>>> >>>> 2) NFV scenario test by me >>>>      on duplex/multi standard virtual setup >>>>            duplex bare metal setup >>>> ===== Setup >>>> ================================================================================================================================= >>>> >>>> 2020-05-14 02:30:05.524  Create flavor small >>>> ........................................ [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>>> .............................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_swap >>>> ................................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>>> ......................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium >>>> ....................................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>>> ............................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>>> .................................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>>> ........................ [OKAY] >>>> 2020-05-14 02:30:05.653  Create image cirros >>>> ........................................ [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros >>>> ....................................... [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>>> ............................. [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>>> .................................. [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>>> ........................ [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume empty_volume >>>> ................................. [OKAY] >>>> 2020-05-14 02:30:05.786  Create network internal >>>> .................................... [OKAY] >>>> 2020-05-14 02:30:06.158  Create network external >>>> .................................... [OKAY] >>>> 2020-05-14 02:30:06.772  Create subnet internal >>>> ..................................... [OKAY] >>>> 2020-05-14 02:30:07.661  Create subnet external >>>> ..................................... [OKAY] >>>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>>> ................................... [OKAY] >>>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>>> ......................... [OKAY] >>>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>>> .............................. 
[OKAY] >>>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>>> .................... [OKAY] >>>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>>> ............................. [OKAY] >>>> 2020-05-14 02:31:21.241  Create instance >>>> cirros-image-with-volumes-1  ................ [OKAY] >>>> ============================================================================================================================================= >>>> >>>> ===== Test Iteration 0 (single-execution) >>>> =================================================================================================== >>>> >>>> 2020-05-14 02:33:04.172  Test Instance-Pause >>>> ........................................ [OKAY]  (2020-05-14 >>>> 02:33:18.078 Δ=0:00:12.870) >>>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:33:41.608 Δ=0:00:05.866) >>>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:33:59.546 Δ=0:00:05.792) >>>> 2020-05-14 02:34:11.103  Test Instance-Resume >>>> ....................................... [OKAY]  (2020-05-14 >>>> 02:34:17.756 Δ=0:00:05.937) >>>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>>> Δ=0:02:15.748) >>>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>>> Δ=0:00:11.704) >>>> 2020-05-14 02:37:30.673  Test Instance-Stop >>>> ......................................... [OKAY]  (2020-05-14 >>>> 02:38:44.543 Δ=0:01:13.220) >>>> 2020-05-14 02:39:00.481  Test Instance-Start >>>> ........................................ [OKAY]  (2020-05-14 >>>> 02:39:07.198 Δ=0:00:06.068) >>>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>>> Δ=0:00:22.306) >>>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>>> Δ=0:01:24.179) >>>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>>> Δ=0:00:05.884) >>>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>>> Δ=0:00:21.637) >>>> 2020-05-14 02:43:52.320  Test Instance-Resize >>>> ....................................... [OKAY]  (2020-05-14 >>>> 02:45:16.409 Δ=0:01:22.812) >>>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>>> Δ=0:00:05.777) >>>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>>> Δ=0:00:21.748) >>>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:48:59.762 Δ=0:01:12.980) >>>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>>> >>>> 3) Another 2 test >>>>      a) Using IPv6 >>>>           It can pass with workaround now.  I need one more fix for it. >>>>           In my previous patch https://review.opendev.org/#/c/716524 >>>> (merged), I dynamically override below >>>>              config_override: | >>>>                  [mysqld] >>>>                  bind_address=:: >>>>           However, it did not work now. 
From log,  it shows error >>>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>>> line: 1'" >>>>           I tried many methods, but could not remove the first line >>>> in 20-override.cnf >>>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>>> 20-override.cnf >>>>                  |- >>>>                  [mysqld] >>>>                  bind_address=:: >>>>          I can only add it in manifest.yaml as a static override >>>> like below. >>>>                 values: >>>>                    conf: >>>>                        database: >>>>                            config_override: | >>>>                                [mysqld] >>>>                                bind_address=:: >>>>                   b) Reset of controllers and check status of >>>> OpenStack while a controller is rebooting. >>>>           I have tested it and pass on simplex. >>>>           For duplex, I have a setup issue in my side. >>>>           @Jascanu, Nicolae  Could you help me on it for duplex >>>> test, if you have time today. Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月26日 21:13 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> Can you publish the list of tests that have been run for openstack? >>>> >>>> Also has openstack been tested for the following scenarios: >>>> 1) Using IPv6 >>>> 2) Reset of controllers and check status of openstack while a >>>> controller is rebooting? >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Monday, May 25, 2020 3:14 AM >>>> To: starlingx-discuss at lists.starlingx.io >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi all, >>>> >>>> We have passed all sanity test on all setup. Thanks Nicolae!! >>>> We also built out OpenStack service images from layered build >>>> environment. >>>> >>>> Please help to review and push below patches to be merged, thanks! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>>> >>>> BRs >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月14日 16:49 >>>> To: 'Saul Wold' ; >>>> 'starlingx-discuss at lists.starlingx.io' >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi all, >>>> >>>> Call for patch review again! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月9日 8:38 >>>> To: Saul Wold ; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Agree! >>>> >>>> -----Original Message----- >>>> From: Saul Wold >>>> Sent: 2020年5月9日 0:29 >>>> To: starlingx-discuss at lists.starlingx.io >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> I would strengthen that to no changes until we get Green Sanity >>>> other than what's required to make them Green. >>>> >>>> Full Stop! >>>> >>>> Sau! 
>>>> >>>> >>>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>>> Until we can get sanity passing for several days in a row I strongly >>>>> suggest we do not allow any further changes into the load related to >>>>> OpenStack.  Folks can continue with reviews but let’s hold off >>>>> allowing merges related to a new OpenStack version. >>>>> >>>>> Frank >>>>> >>>>> *From:*Liu, ZhipengS >>>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>>> *To:* starlingx-discuss >>>>> *Cc:* YU CHENGDE ; Penney, Don >>>>> >>>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>>> for patch review!! >>>>> >>>>> Hi all, >>>>> >>>>> Please help to review OpenStack Ussuri upgrade patches. >>>>> >>>>> Our target is to get all below patches merged by end of next week. >>>>> >>>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>>>> :merged) >>>>> >>>>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>>>> python3.6 for OpenStack services as ussuri release only support >>>>> python3. >>>>> >>>>> We also rebased openstack-helm/helm-infra to latest version. >>>>> >>>>> Engineering build test status. >>>>> >>>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>>> PASS. >>>>> >>>>> Thanks! >>>>> >>>>> Zhipeng >>>>> >>>>> >>>>> _______________________________________________ >>>>> Starlingx-discuss mailing list >>>>> Starlingx-discuss at lists.starlingx.io >>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > 
_______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From taimoor.imtiaz at intel.com Wed Jun 10 21:52:43 2020 From: taimoor.imtiaz at intel.com (Imtiaz, Taimoor) Date: Wed, 10 Jun 2020 21:52:43 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <20200610201440.t2c334s43qtwpsqk@yuggoth.org> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> <9b55496eddfb47178a8023ba59e6ccd5@intel.com> <20200610201440.t2c334s43qtwpsqk@yuggoth.org> Message-ID: Hi Jeremy, I didn't mean to ascribe age to communities if that is what it came across as. I meant to say that Brendan Berns (as a steward) is different from Torvalds* and as you said it definitely is a matter of taste. I just think that many newer communities are using these tools. > Have any details on this? Popular Web search engines already crawl and index our list archives, and turn up relevant results from them. I do not actually. I think I meant to say that search engine functionality is built-in. *Of course my impression comes from news sources. Best, Taimoor -----Original Message----- From: Jeremy Stanley Sent: Wednesday, June 10, 2020 22:15 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Adoption discussions in PTG On 2020-06-10 18:50:05 +0000 (+0000), Imtiaz, Taimoor wrote: > Sure, I do not disagree that mailing lists are functional. > Discourse is so much more welcoming and information is easy to > discover. "Welcoming" and "easy to discover" are matters of personal taste, and so differ widely based on individual experience. De gustibus non est disputandum. > If you monitor a community forum such as Kubernetes', you'll see > people having fun too (showing off projects etc.). It's also a bit > more realtime and meant for threaded discussions. E-mail and thus mailing lists are also explicitly designed for threaded discussions, unless you've decided to cripple your communications by using a terrible mail client. My client shows me thread trees of list messages just fine. > It was just a suggestion on my end. I do not think we should compare > cloud native communities with Linux. The stewards are different > generations of folks and mindset are totally different. I hesitate to ascribe ageist generalizations to communication tooling preferences. Are you suggesting that the Linux kernel doesn't have younger developers? Or that Kubernetes doesn't have older developers? What is specific to the Linux maintainer "mindset" which differentiates it from the Kubernetes maintainer "mindset" in this regard? > In my observation, most people do not go through the hassle of > registering on mailing lists. They do however like browsing forums (I > know I do).. I have no problem subscribing to mailing lists, in fact I'm subscribed to many. I much prefer getting messages in my inbox and not having to go check a dozen different Web sites to read new forum posts for discussions in which I'm involved/interested. To be honest, I'd rather not start up a Web browser at all when I can help it. > SEO tooling also likes it 😊 [...] Have any details on this? Popular Web search engines already crawl and index our list archives, and turn up relevant results from them. 
-- Jeremy Stanley Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 From zhipengs.liu at intel.com Thu Jun 11 01:43:07 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 11 Jun 2020 01:43:07 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. 
>>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE    >>> starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  
But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. 
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk?  >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. 
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. 
>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. 
>>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. >>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning                       >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning                       >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning                       >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup                                    >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform                 >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. 
[OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. 
[OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! 
>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! 
>>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>> s >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From haochuan.z.chen at intel.com Thu Jun 11 02:53:06 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Thu, 11 Jun 2020 02:53:06 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , Message-ID: Hi voiculeasa I confirm backup and restore works without ceph backend. This issue is caused with my improper provision step. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Tuesday, June 9, 2020 5:54 PM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.chengde at 99cloud.net Thu Jun 11 03:21:39 2020 From: yu.chengde at 99cloud.net (YuChengDe) Date: Thu, 11 Jun 2020 11:21:39 +0800 (GMT+08:00) Subject: [Starlingx-discuss] =?utf-8?q?=5Bpytest=5D_Please_teach_me_how_to?= =?utf-8?q?_use_pytest_on_stx-openstack?= Message-ID: Hello: I am going to testing our stx-openstack through starlingx/test https://opendev.org/starlingx/test/src/branch/r/stx.3.0 May I ask for some tutorial and testing example? Many thanks. -- ————————————————————————————— 九州云信息科技有限公司 99CLOUD Inc. 于成德 产品开发部 邮箱(Email): yu.chengde at 99cloud.net 手机(Mobile): 13816965096 地址(Addr): 上海市局门路427号1号楼206 Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China 网址(Site): http://www.99cloud.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.chengde at 99cloud.net Thu Jun 11 03:22:11 2020 From: yu.chengde at 99cloud.net (YuChengDe) Date: Thu, 11 Jun 2020 11:22:11 +0800 (GMT+08:00) Subject: [Starlingx-discuss] =?utf-8?q?=5Bpytest=5D_Please_teach_me_how_to?= =?utf-8?q?_use_pytest_on_stx-openstack?= Message-ID: Hello: I am going to testing our stx-openstack through starlingx/test https://opendev.org/starlingx/test/src/branch/r/stx.3.0 May I ask for some tutorial and testing example? Many thanks. -- ————————————————————————————— 九州云信息科技有限公司 99CLOUD Inc. 于成德 产品开发部 邮箱(Email): yu.chengde at 99cloud.net 手机(Mobile): 13816965096 地址(Addr): 上海市局门路427号1号楼206 Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China 网址(Site): http://www.99cloud.net -------------- next part -------------- An HTML attachment was scrubbed... 
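For the pytest question above, the OpenStack sanity tests in the starlingx/test repository appear to be driven by pytest from an automated-pytest-suite directory. A minimal sketch of a first run is below; the directory name, the template config file and the --testcase-config option are assumptions based on that repo's layout and should be confirmed against its README before use:

  # Fetch the framework on the branch matching the deployed release
  git clone -b r/stx.3.0 https://opendev.org/starlingx/test.git
  cd test/automated-pytest-suite

  # Install the suite's Python dependencies in a virtualenv
  python3 -m venv .venv && . .venv/bin/activate
  pip install -r requirements.txt

  # Describe the target lab (floating OAM IP, credentials, ...) in a config
  # file; the template file name is an assumption -- check the suite README
  cp stx-test_template.conf my_lab.conf

  # Run the OpenStack sanity testcases against that lab
  pytest --testcase-config=my_lab.conf testcases/sanity/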
URL: From ildiko.vancsa at gmail.com Thu Jun 11 09:06:48 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 11 Jun 2020 11:06:48 +0200 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <9b55496eddfb47178a8023ba59e6ccd5@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> <9b55496eddfb47178a8023ba59e6ccd5@intel.com> Message-ID: <29D88630-EFDD-40BD-AA7A-F30761359F77@gmail.com> […] > I agree with the blog post idea and I'll try to get some users to write those. […] Sounds great! Please let me know if you need any help throughout the process. Thanks, Ildikó From maryx.camp at intel.com Thu Jun 11 14:38:47 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 11 Jun 2020 14:38:47 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 2020-06-10 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Our tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 2020-06-10  . All -- reviews merged since last meeting:  1 . All -- bug status -- 5 total, 2 WIP o [ww23] Fix search function & Add instructions for building stx-openstack application [not started] o [ww20] Networking documentation [not started] o [ww17] Debug guide [WIP]  o [ww16] Build Avoidance [WIP] https://docs.starlingx.io/developer_resources/build_guide.html#build-avoidance) . Reviews in progress:    o Chinese document for layered build https://review.opendev.org/#/c/726737/  o TSN in Kata containers - [WIP] Mary's clerical edits. o Rook migration - Martin Chen author - orig review is merged. AR Mary to do clerical edits.  o Modifying layered build commands (add pike / remove pike)  This review is valid for the current situation: https://review.opendev.org/#/c/717424/  . All -- Opens o Bart explained the reviews from Andreas Jaeger which updated the openstackdocs theme: https://review.opendev.org/#/c/733576/ and https://review.opendev.org/#/c/733566/  The "submit a bug link" on the doc pages points to LP now, hooray! o Greg sent an email suggestion to provide an alternate method for accessing openstack with the local CLI. AR Mary update https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#local-cli o Poornima joined to ask for reviewers on the Layered Build guide: https://review.opendev.org/#/c/733048/9 From ildiko.vancsa at gmail.com Thu Jun 11 15:37:40 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 11 Jun 2020 17:37:40 +0200 Subject: [Starlingx-discuss] StarlingX is a new confirmed Open Infrastructure project!! Message-ID: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> Hi StarlingX Community, I’m reaching out to you to share the good news that the Board of Directors of the OpenStack Foundation has just approved to confirm StarlingX as a new top-level Open Infrastructure project supported by OSF[1]. Hereby I would like to congratulate to the community for all the hard work and achievements during the pilot phase and looking forward to continue working with you to shape both the community and the platform to achieve further successes! I would also like to thank Ian and Saul who took on the task to present to the Board today, they did an amazing job to talk about the first two years of the project. 
Thanks, Ildikó [1] https://www.openstack.org/news/view/454/starlingx-confirmed-as-toplevel-osf-project From glenn.seiler at windriver.com Thu Jun 11 16:00:11 2020 From: glenn.seiler at windriver.com (Seiler, Glenn) Date: Thu, 11 Jun 2020 16:00:11 +0000 Subject: [Starlingx-discuss] StarlingX is a new confirmed Open Infrastructure project!! In-Reply-To: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> References: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> Message-ID: Congratulations to everyone who has participated in this great project over the past two years. This is a fantastic achievement. I know StarlingX is going to continue to prosper and grow. -glenn ________________________________ From: Ildiko Vancsa Sent: Thursday, June 11, 2020 8:37:40 AM To: starlingx Subject: [Starlingx-discuss] StarlingX is a new confirmed Open Infrastructure project!! Hi StarlingX Community, I’m reaching out to you to share the good news that the Board of Directors of the OpenStack Foundation has just approved to confirm StarlingX as a new top-level Open Infrastructure project supported by OSF[1]. Hereby I would like to congratulate to the community for all the hard work and achievements during the pilot phase and looking forward to continue working with you to shape both the community and the platform to achieve further successes! I would also like to thank Ian and Saul who took on the task to present to the Board today, they did an amazing job to talk about the first two years of the project. Thanks, Ildikó [1] https://www.openstack.org/news/view/454/starlingx-confirmed-as-toplevel-osf-project _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.a.cobbley at intel.com Thu Jun 11 16:06:28 2020 From: david.a.cobbley at intel.com (Cobbley, David A) Date: Thu, 11 Jun 2020 16:06:28 +0000 Subject: [Starlingx-discuss] StarlingX is a new confirmed Open Infrastructure project!! In-Reply-To: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> References: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> Message-ID: <95CAB4F4-F045-4E0B-B5FE-912EB2AD4C75@intel.com> This is wonderful news, and from where the project started, was not easy to achieve. It is a testament to the passion and dedication of the team that the project has reached this level and overcome several challenges along the way. Congratulations! --David Cobbley On 6/11/20, 8:39 AM, "Ildiko Vancsa" wrote: Hi StarlingX Community, I’m reaching out to you to share the good news that the Board of Directors of the OpenStack Foundation has just approved to confirm StarlingX as a new top-level Open Infrastructure project supported by OSF[1]. Hereby I would like to congratulate to the community for all the hard work and achievements during the pilot phase and looking forward to continue working with you to shape both the community and the platform to achieve further successes! I would also like to thank Ian and Saul who took on the task to present to the Board today, they did an amazing job to talk about the first two years of the project. 
Thanks, Ildikó [1] https://www.openstack.org/news/view/454/starlingx-confirmed-as-toplevel-osf-project _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Thu Jun 11 17:24:41 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 13:24:41 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 225 - Failure! Message-ID: <1122549494.1633.1591896282389.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 225 Status: Failure Timestamp: 20200611T172331Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200611T142734Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs MASTER_BUILD_NUMBER: 2 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200611T142734Z DOCKER_BUILD_ID: jenkins-ussuri-20200611T142734Z-builder TIMESTAMP: 20200611T142734Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs From build.starlingx at gmail.com Thu Jun 11 17:24:43 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 13:24:43 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 2 - Failure! Message-ID: <448997430.1636.1591896284713.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 2 Status: Failure Timestamp: 20200611T142734Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From sgw at linux.intel.com Thu Jun 11 17:53:33 2020 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 11 Jun 2020 10:53:33 -0700 Subject: [Starlingx-discuss] Ussuri Test build failed Message-ID: Zhipeng, Looks like there is a missing dependency issue with the Ussuri build see the logs [0]. 
Summary of Errors: Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libaprutil-1.so.0()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: system-logos >= 7.92.1-1 Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libaprutil-1.so.0()(64bit) Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) Requires: scl-utils Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libjansson.so.4()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libapr-1.so.0()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libapr-1.so.0()(64bit) Error: Package: rh-python36-runtime-2.0-1.el7.x86_64 (ussuri-wsgi) Requires: scl-utils Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: /etc/mime.types Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Please take a look into this. Thanks Sau! [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_base_image-238.log.html From maryx.camp at intel.com Thu Jun 11 18:13:06 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 11 Jun 2020 18:13:06 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , Message-ID: Hi Martin and Dan, The Backup and restore guide review has just merged in the StarlingX documentation. Please have a look at the guide here: https://docs.starlingx.io/developer_resources/backup_restore.html If I can fix the guide to be more clear and prevent errors, please open a Launchpad by clicking the "bug" button or submit a review with changes. Thanks in advance for your feedback to improve the STX documentation, Mary Camp PTIGlobal Technical Writer | maryx.camp at intel.com From: Chen, Haochuan Z Sent: Wednesday, June 10, 2020 10:53 PM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa I confirm backup and restore works without ceph backend. This issue is caused with my improper provision step. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 9, 2020 5:54 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.liu at windriver.com Thu Jun 11 18:43:10 2020 From: yang.liu at windriver.com (Liu, Yang (YOW)) Date: Thu, 11 Jun 2020 18:43:10 +0000 Subject: [Starlingx-discuss] [pytest] Please teach me how to use pytest on stx-openstack In-Reply-To: References: Message-ID: Hi Chengde, You can start with the training video in following share drive: https://drive.google.com/drive/folders/1AvUCq3ojuhNZV6XE8YdRhp9PVxixRIeE Cheers, Yang From: YuChengDe [mailto:yu.chengde at 99cloud.net] Sent: June-10-20 11:22 PM To: starlingx-discuss at lists.starlingx.io; Liu, Yang (YOW) Subject: [pytest] Please teach me how to use pytest on stx-openstack Hello: I am going to testing our stx-openstack through starlingx/test https://opendev.org/starlingx/test/src/branch/r/stx.3.0 May I ask for some tutorial and testing example? Many thanks. [http://mailhz.qiye.163.com/qiyeimage/logo/60511048/1576638602260.jpg] -- ————————————————————————————— 九州云信息科技有限公司 99CLOUD Inc. 于成德 产品开发部 邮箱(Email): yu.chengde at 99cloud.net 手机(Mobile): 13816965096 地址(Addr): 上海市局门路427号1号楼206 Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China 网址(Site): http://www.99cloud.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolae.jascanu at intel.com Thu Jun 11 19:31:52 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Thu, 11 Jun 2020 19:31:52 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200611T021306Z Message-ID: Sanity Test from 2020-June-11 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200611T021306Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200611T021306Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Thu Jun 11 22:35:10 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 12 Jun 2020 00:35:10 +0200 Subject: [Starlingx-discuss] StarlingX PTG overview blog post Message-ID: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> Hi, As discussed on the community call this week I typed up a summary[1] about the StarlingX PTG sessions. It is a draft version, but I wanted to share for feedback so that we can put it up on the blog early next week. As a general objective for the blog I kept it relatively high level with pointers where I had with further details so it is easy to read and those who are interested can follow up on specific items. Please keep this in mind when you review the text. Please leave comments or fixes in the etherpad __by the end of day Monday (June 15)__. Please let me know if you have any questions. Thanks, Ildikó [1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020 From bruce.e.jones at intel.com Thu Jun 11 23:06:30 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 11 Jun 2020 23:06:30 +0000 Subject: [Starlingx-discuss] StarlingX PTG overview blog post In-Reply-To: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> References: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> Message-ID: Wow, that looks amazingly good Ildiko, especially considering the time of day in your time zone when you were attending the PTG. Thank you! 
brucej -----Original Message----- From: Ildiko Vancsa Sent: Thursday, June 11, 2020 3:35 PM To: starlingx Subject: [Starlingx-discuss] StarlingX PTG overview blog post Hi, As discussed on the community call this week I typed up a summary[1] about the StarlingX PTG sessions. It is a draft version, but I wanted to share for feedback so that we can put it up on the blog early next week. As a general objective for the blog I kept it relatively high level with pointers where I had with further details so it is easy to read and those who are interested can follow up on specific items. Please keep this in mind when you review the text. Please leave comments or fixes in the etherpad __by the end of day Monday (June 15)__. Please let me know if you have any questions. Thanks, Ildikó [1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Thu Jun 11 23:09:50 2020 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 11 Jun 2020 16:09:50 -0700 Subject: [Starlingx-discuss] StarlingX PTG overview blog post In-Reply-To: References: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> Message-ID: <477904d7-fbdc-76de-c63b-7b4bd8c642da@linux.intel.com> It's a great start, I tweaked on item in multios section, I think there might be some re-ordering of the paragaphs just to move some of the adoption/community stuff first and maybe the 4.0 and 5x planning second. I did not want to make those changes directly, but can. Sau! On 6/11/20 4:06 PM, Jones, Bruce E wrote: > Wow, that looks amazingly good Ildiko, especially considering the time of day in your time zone when you were attending the PTG. Thank you! > > brucej > > -----Original Message----- > From: Ildiko Vancsa > Sent: Thursday, June 11, 2020 3:35 PM > To: starlingx > Subject: [Starlingx-discuss] StarlingX PTG overview blog post > > Hi, > > As discussed on the community call this week I typed up a summary[1] about the StarlingX PTG sessions. > > It is a draft version, but I wanted to share for feedback so that we can put it up on the blog early next week. As a general objective for the blog I kept it relatively high level with pointers where I had with further details so it is easy to read and those who are interested can follow up on specific items. Please keep this in mind when you review the text. > > Please leave comments or fixes in the etherpad __by the end of day Monday (June 15)__. > > Please let me know if you have any questions. > > Thanks, > Ildikó > > [1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020 > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From build.starlingx at gmail.com Fri Jun 12 00:31:52 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 20:31:52 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 226 - Still Failing! 
In-Reply-To: <1399540446.1631.1591896280262.JavaMail.javamailuser@localhost> References: <1399540446.1631.1591896280262.JavaMail.javamailuser@localhost> Message-ID: <335343019.1641.1591921913022.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 226 Status: Still Failing Timestamp: 20200611T174357Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200611T142734Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs MASTER_BUILD_NUMBER: 2 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200611T142734Z DOCKER_BUILD_ID: jenkins-ussuri-20200611T142734Z-builder TIMESTAMP: 20200611T142734Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs From build.starlingx at gmail.com Fri Jun 12 03:01:49 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 23:01:49 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 421 - Failure! Message-ID: <1177108974.1644.1591930911993.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 421 Status: Failure Timestamp: 20200612T014342Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200612T013217Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20200612T013217Z DOCKER_BUILD_ID: jenkins-master-distro-20200612T013217Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200612T013217Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20200612T013217Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Fri Jun 12 03:01:53 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 23:01:53 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_distro_master_master - Build # 148 - Failure! 
Message-ID: <1883025053.1647.1591930916294.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 148 Status: Failure Timestamp: 20200612T013217Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200612T013217Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From zhipengs.liu at intel.com Fri Jun 12 04:05:44 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 12 Jun 2020 04:05:44 +0000 Subject: [Starlingx-discuss] Ussuri Test build failed In-Reply-To: References: Message-ID: Hi Scott, Root cause found! Please help double check your cengn script. In the log, I saw you added 4 repos exactly. But build-stx-bash.sh run just with 2 repos as you see in below log. I guess you need change "EXTRA_ARGS=" to "EXTRA_ARGS+=" + EXTRA_ARGS=' --repo stx-local-build,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs/RPMS/std --repo stx-mirror-distro,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs/RPMS ' + '[' ussuri-stable-latest == ussuri-stable-latest ']' + EXTRA_ARGS=' --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ ' + /localdisk/designer/jenkins/ussuri/cgcs-root/build-tools/build-docker-images/build-stx-base.sh --os centos --os-version 7.5.1804 --stream stable --version ussuri-stable --user starlingx --registry docker.io --attempts 5 --push --latest --latest-tag=ussuri-stable-latest --clean --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ Thanks! Zhipeng -----Original Message----- From: Saul Wold Sent: 2020年6月12日 1:54 To: starlingx-discuss at lists.starlingx.io; Hu, Yong ; Liu, ZhipengS Subject: Ussuri Test build failed Zhipeng, Looks like there is a missing dependency issue with the Ussuri build see the logs [0]. 
Summary of Errors: Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libaprutil-1.so.0()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: system-logos >= 7.92.1-1 Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libaprutil-1.so.0()(64bit) Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) Requires: scl-utils Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libjansson.so.4()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libapr-1.so.0()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libapr-1.so.0()(64bit) Error: Package: rh-python36-runtime-2.0-1.el7.x86_64 (ussuri-wsgi) Requires: scl-utils Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: /etc/mime.types Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Please take a look into this. Thanks Sau! [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_base_image-238.log.html From build.starlingx at gmail.com Fri Jun 12 08:03:13 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 12 Jun 2020 04:03:13 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 3 - Still Failing! In-Reply-To: <729016722.1634.1591896282936.JavaMail.javamailuser@localhost> References: <729016722.1634.1591896282936.JavaMail.javamailuser@localhost> Message-ID: <2050768555.1651.1591948993675.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 3 Status: Still Failing Timestamp: 20200612T080012Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200612T080012Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From ildiko.vancsa at gmail.com Fri Jun 12 09:04:32 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 12 Jun 2020 11:04:32 +0200 Subject: [Starlingx-discuss] StarlingX PTG overview blog post In-Reply-To: <477904d7-fbdc-76de-c63b-7b4bd8c642da@linux.intel.com> References: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> <477904d7-fbdc-76de-c63b-7b4bd8c642da@linux.intel.com> Message-ID: Thanks for the quick review and suggestions! @Saul: I thought to start with the technical items and then transition to community and cross-project, but I’m easy on the order. I kind of lost track of what we talked about when exactly so I gave up on chronological order pretty quick. :) I’m fine with you making the change or if you’re more comfortable with me moving things around I can do it based on your preference. Let me know. Thanks, Ildikó > On Jun 12, 2020, at 01:09, Saul Wold wrote: > > It's a great start, I tweaked on item in multios section, I think there might be some re-ordering of the paragaphs just to move some of the adoption/community stuff first and maybe the 4.0 and 5x planning second. 
> > I did not want to make those changes directly, but can. > > Sau! > > > On 6/11/20 4:06 PM, Jones, Bruce E wrote: >> Wow, that looks amazingly good Ildiko, especially considering the time of day in your time zone when you were attending the PTG. Thank you! >> brucej >> -----Original Message----- >> From: Ildiko Vancsa >> Sent: Thursday, June 11, 2020 3:35 PM >> To: starlingx >> Subject: [Starlingx-discuss] StarlingX PTG overview blog post >> Hi, >> As discussed on the community call this week I typed up a summary[1] about the StarlingX PTG sessions. >> It is a draft version, but I wanted to share for feedback so that we can put it up on the blog early next week. As a general objective for the blog I kept it relatively high level with pointers where I had with further details so it is easy to read and those who are interested can follow up on specific items. Please keep this in mind when you review the text. >> Please leave comments or fixes in the etherpad __by the end of day Monday (June 15)__. >> Please let me know if you have any questions. >> Thanks, >> Ildikó >> [1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020 >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yatindra.shashi at intel.com Fri Jun 12 10:13:33 2020 From: yatindra.shashi at intel.com (Shashi, Yatindra) Date: Fri, 12 Jun 2020 10:13:33 +0000 Subject: [Starlingx-discuss] Unable to log in controller-1 after changing password on active controller-0 Message-ID: Hi All, In AIO-Duplex Setup 3.0 As after certain days Stx force user to change the Password, I changed the password in the Controller-0 but I did not do on the cont-1. I had locked/unlocked Cont-1 and tried to login with old/new password but I get access denied. Is there way to reset or change sysadmin Password of cont-1. I am able to login to dashboard and cont-0 with the password I had. Mit freundlichen Grüßen/ with best regards, Yatindra Shashi IoTG DE- Intel Corporation Munich, Germany P Save Paper, Go Digital :) Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Jun 12 16:17:28 2020 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 12 Jun 2020 09:17:28 -0700 Subject: [Starlingx-discuss] Ussuri Test build failed In-Reply-To: References: Message-ID: Turns out that Scott might have found that and fixed it in a second build that happened yesterday afternoon, but the Failure notification does not seem to have been sent. The failed build logs [0] still seem to show a variety of missing dependencies. There might also be another merge conflict. 
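Since every failure in the list below is pip being unable to satisfy a pin taken from /tmp/wheels/upper-constraints.txt, one quick check is to see which requirements branch actually carries those pins. This is only a diagnostic sketch, not a confirmed root cause - the package names are copied from the failures below, and the two constraints URLs are simply the train and ussuri branches of openstack/requirements:

   curl -s https://raw.githubusercontent.com/openstack/requirements/stable/train/upper-constraints.txt \
       | grep -E '^(networkx|pecan|google-api-python-client)==='
   curl -s https://raw.githubusercontent.com/openstack/requirements/stable/ussuri/upper-constraints.txt \
       | grep -E '^(networkx|pecan|google-api-python-client)==='

If the exact versions pip is asking for only show up under one of the two branches, then the wheels tarball and the constraints file the image build consumes have diverged.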
There were 10 failures: stx-cinder> ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) > ERROR: No matching distribution found for google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) stx-fm-rest-api > ERROR: Could not find a version that satisfies the requirement pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.2, 1.2.1, 1.3.1, 1.3.2) > ERROR: No matching distribution found for pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21))stx-glance > ERROR: Could not find a version that satisfies the requirement networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, 2.4rc2, 2.4) > ERROR: No matching distribution found for networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101))stx-gnocchi > ERROR: Could not find a version that satisfies the requirement uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) (from versions: none) > ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) stx-heat > ERROR: Could not find a version that satisfies the requirement networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, 2.4rc2, 2.4) > ERROR: No matching distribution found for networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101))stx-keystone-api-proxy > ERROR: Could not find a version that satisfies the requirement pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, 1.2, 1.2.1, 1.3.1, 1.3.2) > ERROR: No matching distribution found for pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) stx-nova > ERROR: Could not find a version that satisfies the requirement networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, 2.4rc2, 2.4) > ERROR: No matching distribution found for networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) stx-nova-api-proxy > ERROR: Could not find a version that satisfies the requirement repoze.lru===0.7 (from -c /tmp/wheels/upper-constraints.txt (line 571)) (from versions: none) > ERROR: No matching distribution found for repoze.lru===0.7 (from -c /tmp/wheels/upper-constraints.txt (line 571)) stx-openstackclients > ERROR: Could not find a version that satisfies the requirement prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line 159)) (from versions: none) > ERROR: No matching distribution found for prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line 159)) stx-platformclients> ERROR: Could not find a version that satisfies the requirement prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line 159)) (from versions: none) > ERROR: No matching distribution found 
for prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line 159)) Sau! [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_flock_images-200.log.html On 6/11/20 9:05 PM, Liu, ZhipengS wrote: > Hi Scott, > > Root cause found! > Please help double check your cengn script. > In the log, I saw you added 4 repos exactly. > But build-stx-bash.sh run just with 2 repos as you see in below log. > I guess you need change "EXTRA_ARGS=" to "EXTRA_ARGS+=" > > + EXTRA_ARGS=' > --repo stx-local-build,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs/RPMS/std --repo stx-mirror-distro,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs/RPMS ' > + '[' ussuri-stable-latest == ussuri-stable-latest ']' > + EXTRA_ARGS=' > --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ > ' > + /localdisk/designer/jenkins/ussuri/cgcs-root/build-tools/build-docker-images/build-stx-base.sh --os centos --os-version 7.5.1804 --stream stable --version ussuri-stable --user starlingx --registry docker.io --attempts 5 --push --latest --latest-tag=ussuri-stable-latest --clean --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ > > Thanks! > Zhipeng > > -----Original Message----- > From: Saul Wold > Sent: 2020年6月12日 1:54 > To: starlingx-discuss at lists.starlingx.io; Hu, Yong ; Liu, ZhipengS > Subject: Ussuri Test build failed > > > Zhipeng, > > Looks like there is a missing dependency issue with the Ussuri build see the logs [0]. > > Summary of Errors: > > Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libaprutil-1.so.0()(64bit) > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: system-logos >= 7.92.1-1 > Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils-python > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libaprutil-1.so.0()(64bit) > Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils > Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) > Requires: scl-utils > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libjansson.so.4()(64bit) > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libapr-1.so.0()(64bit) > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils > Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libapr-1.so.0()(64bit) > Error: Package: rh-python36-runtime-2.0-1.el7.x86_64 (ussuri-wsgi) > Requires: scl-utils > Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils-python > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: /etc/mime.types > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils-python > > Please take a look into this. > > Thanks > > Sau! 
> > > > [0] > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_base_image-238.log.html > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From sgw at linux.intel.com Fri Jun 12 20:15:34 2020 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 12 Jun 2020 13:15:34 -0700 Subject: [Starlingx-discuss] Recent download_mirror issue developer vs cengn Message-ID: <40507708-c6f8-8a54-c938-129b9884de11@linux.intel.com> Folks (particularly Scott and Davlet): There was a recent issues with a developer in India (Poornima) doing the CVE updates for expat and gettext. When she used the download_mirrors script with the default settings (which is appropriate), it used the /etc/yum.repos.d setup of repo lists. The CentOS-Base.repo points to a set of mirrorlists, not to the mirror.centos.org directly. So when yumdownloader started looking it found the local to India mirrors and the Centos7 repo which was updated with the newer versions. From the logs on her build machine I can see this: url_srpm:http://centos.mirrors.estointernet.in/7.8.2003/os/x86_64/Packages/expat-devel-2.1.0-11.el7.x86_64.rpm Notice it's using 7.8 even though is not explicitly mentioned in default repo lists. So yumdownloader is searching though the mirrorlist and finding the 7.8 repo above and beyond the set we would normally default to. What I am not sure about is how Cengn does the mirroring process, I think it uses download_mirror with the default /etc/yum.repos.d, but I can't be sure. I need Scott and Davlet to weigh here until I can get better access to Jenkins or recreate a Jenkins instance. Sau! From scott.little at windriver.com Fri Jun 12 20:26:46 2020 From: scott.little at windriver.com (Scott Little) Date: Fri, 12 Jun 2020 16:26:46 -0400 Subject: [Starlingx-discuss] Recent download_mirror issue developer vs cengn In-Reply-To: <40507708-c6f8-8a54-c938-129b9884de11@linux.intel.com> References: <40507708-c6f8-8a54-c938-129b9884de11@linux.intel.com> Message-ID: What arguments where use for download_mirror.sh ? Scott On 2020-06-12 4:15 p.m., Saul Wold wrote: > > Folks (particularly Scott and Davlet): > > There was a recent issues with a developer in India (Poornima) doing > the CVE updates for expat and gettext. When she used the > download_mirrors script with the default settings (which is > appropriate), it used the /etc/yum.repos.d setup of repo lists. The > CentOS-Base.repo points to a set of mirrorlists, not to the > mirror.centos.org directly.  So when yumdownloader started looking it > found the local to India mirrors and the Centos7 repo which was > updated with the newer versions. > > From the logs on her build machine I can see this: > url_srpm:http://centos.mirrors.estointernet.in/7.8.2003/os/x86_64/Packages/expat-devel-2.1.0-11.el7.x86_64.rpm > > > Notice it's using 7.8 even though is not explicitly mentioned in > default repo lists. So yumdownloader is searching though the > mirrorlist and finding the 7.8 repo above and beyond the set we would > normally default to. > > What I am not sure about is how Cengn does the mirroring process, I > think it uses download_mirror with the default /etc/yum.repos.d, but I > can't be sure. > > I need Scott and Davlet to weigh here until I can get better access to > Jenkins or recreate a Jenkins instance. > > Sau! 
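(If it turns out the default /etc/yum.repos.d really was in play, one way to keep yumdownloader from wandering onto whatever point release the public mirrorlist happens to serve is to replace the mirrorlist= entry with a baseurl pinned to a fixed release on vault.centos.org. This is only a sketch of the idea, not what download_mirror.sh does today, and 7.6.1810 below is just an example point release:

   [centos-base-pinned]
   name=CentOS 7.6.1810 - Base (pinned)
   baseurl=http://vault.centos.org/7.6.1810/os/x86_64/
   gpgcheck=1
   gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
   enabled=1

With a pinned baseurl, yumdownloader cannot resolve expat-devel from a 7.8 mirror the way the url_srpm line above shows.)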
From sgw at linux.intel.com Fri Jun 12 20:40:27 2020 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 12 Jun 2020 13:40:27 -0700 Subject: [Starlingx-discuss] Recent download_mirror issue developer vs cengn In-Reply-To: References: <40507708-c6f8-8a54-c938-129b9884de11@linux.intel.com> Message-ID: <0183e491-ffee-ff0f-22ba-e2db69432a41@linux.intel.com> On 6/12/20 1:26 PM, Scott Little wrote: > What arguments where use for download_mirror.sh ? > None, as far as I can tell from the history in the shell for the container. That's why I said "default settings" below, sorry if that was not clear. Sau! > Scott > > > On 2020-06-12 4:15 p.m., Saul Wold wrote: >> >> Folks (particularly Scott and Davlet): >> >> There was a recent issues with a developer in India (Poornima) doing >> the CVE updates for expat and gettext. When she used the >> download_mirrors script with the default settings (which is >> appropriate), it used the /etc/yum.repos.d setup of repo lists. The >> CentOS-Base.repo points to a set of mirrorlists, not to the >> mirror.centos.org directly.  So when yumdownloader started looking it >> found the local to India mirrors and the Centos7 repo which was >> updated with the newer versions. >> >> From the logs on her build machine I can see this: >> url_srpm:http://centos.mirrors.estointernet.in/7.8.2003/os/x86_64/Packages/expat-devel-2.1.0-11.el7.x86_64.rpm >> >> >> Notice it's using 7.8 even though is not explicitly mentioned in >> default repo lists. So yumdownloader is searching though the >> mirrorlist and finding the 7.8 repo above and beyond the set we would >> normally default to. >> >> What I am not sure about is how Cengn does the mirroring process, I >> think it uses download_mirror with the default /etc/yum.repos.d, but I >> can't be sure. >> >> I need Scott and Davlet to weigh here until I can get better access to >> Jenkins or recreate a Jenkins instance. >> >> Sau! From zhipengs.liu at intel.com Sat Jun 13 00:18:46 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Sat, 13 Jun 2020 00:18:46 +0000 Subject: [Starlingx-discuss] Ussuri Test build failed In-Reply-To: References: Message-ID: Hi Scott, From log, we are still using https://raw.githubusercontent.com/openstack/requirements/stable/train/upper-constraints.txt not https://raw.githubusercontent.com/openstack/requirements/stable/ussuri/upper-constraints.txt You might not cherry pick below patch normally. https://review.opendev.org/#/c/712880 Please make sure cherry pick below 2 patches before start building openstack images. https://review.opendev.org/712862 https://review.opendev.org/712880 Thanks! Zhipeng -----Original Message----- From: Saul Wold Sent: 2020年6月13日 0:17 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] Ussuri Test build failed Turns out that Scott might have found that and fixed it in a second build that happened yesterday afternoon, but the Failure notification does not seem to have been sent. The failed build logs [0] still seem to show a variety of missing dependencies. There might also be another merge conflict. 
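A quick way to confirm the cherry-picks landed before kicking the next build - the tree path below is taken from the MY_REPO value in the failed build's parameters, so it is only a guess at the Jenkins workspace layout:

   cd /localdisk/designer/jenkins/ussuri/cgcs-root
   grep -rn "stable/train/upper-constraints.txt" .
   grep -rn "stable/ussuri/upper-constraints.txt" . | head

If the train URL still shows up anywhere under cgcs-root, that workspace is probably still missing the two reviews above.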
There were 10 failures: stx-cinder> ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) > ERROR: No matching distribution found for > google-api-python-client===1.7.11 (from -c > /tmp/wheels/upper-constraints.txt (line 303)) stx-fm-rest-api > ERROR: Could not find a version that satisfies the requirement > pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) > (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, > 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, > 1.2, 1.2.1, 1.3.1, 1.3.2) > ERROR: No matching distribution found for pecan===1.3.3 (from -c > /tmp/wheels/upper-constraints.txt (line 21))stx-glance > ERROR: Could not find a version that satisfies the requirement > networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) > (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, > 2.4rc2, 2.4) > ERROR: No matching distribution found for networkx===2.3 (from -c > /tmp/wheels/upper-constraints.txt (line 101))stx-gnocchi > ERROR: Could not find a version that satisfies the requirement > uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line > 705)) (from versions: none) > ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c > /tmp/wheels/upper-constraints.txt (line 705)) stx-heat > ERROR: Could not find a version that satisfies the requirement > networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) > (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, > 2.4rc2, 2.4) > ERROR: No matching distribution found for networkx===2.3 (from -c > /tmp/wheels/upper-constraints.txt (line 101))stx-keystone-api-proxy > ERROR: Could not find a version that satisfies the requirement > pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) > (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, > 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, > 1.2, 1.2.1, 1.3.1, 1.3.2) > ERROR: No matching distribution found for pecan===1.3.3 (from -c > /tmp/wheels/upper-constraints.txt (line 21)) stx-nova > ERROR: Could not find a version that satisfies the requirement > networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) > (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, > 2.4rc2, 2.4) > ERROR: No matching distribution found for networkx===2.3 (from -c > /tmp/wheels/upper-constraints.txt (line 101)) stx-nova-api-proxy > ERROR: Could not find a version that satisfies the requirement > repoze.lru===0.7 (from -c /tmp/wheels/upper-constraints.txt (line > 571)) (from versions: none) > ERROR: No matching distribution found for repoze.lru===0.7 (from -c > /tmp/wheels/upper-constraints.txt (line 571)) stx-openstackclients > ERROR: Could not find a version that satisfies the requirement > prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line > 159)) (from versions: none) > ERROR: No matching distribution found for prettytable===0.7.2 (from -c > /tmp/wheels/upper-constraints.txt (line 159)) stx-platformclients> ERROR: Could not find a version that satisfies the requirement prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line 
159)) (from versions: none) > ERROR: No matching distribution found for prettytable===0.7.2 (from -c > /tmp/wheels/upper-constraints.txt (line 159)) Sau! [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_flock_images-200.log.html On 6/11/20 9:05 PM, Liu, ZhipengS wrote: > Hi Scott, > > Root cause found! > Please help double check your cengn script. > In the log, I saw you added 4 repos exactly. > But build-stx-bash.sh run just with 2 repos as you see in below log. > I guess you need change "EXTRA_ARGS=" to "EXTRA_ARGS+=" > > + EXTRA_ARGS=' > --repo stx-local-build,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs/RPMS/std --repo stx-mirror-distro,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs/RPMS ' > + '[' ussuri-stable-latest == ussuri-stable-latest ']' > + EXTRA_ARGS=' > --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ > ' > + /localdisk/designer/jenkins/ussuri/cgcs-root/build-tools/build-docker-images/build-stx-base.sh --os centos --os-version 7.5.1804 --stream stable --version ussuri-stable --user starlingx --registry docker.io --attempts 5 --push --latest --latest-tag=ussuri-stable-latest --clean --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ > > Thanks! > Zhipeng > > -----Original Message----- > From: Saul Wold > Sent: 2020年6月12日 1:54 > To: starlingx-discuss at lists.starlingx.io; Hu, Yong ; Liu, ZhipengS > Subject: Ussuri Test build failed > > > Zhipeng, > > Looks like there is a missing dependency issue with the Ussuri build see the logs [0]. > > Summary of Errors: > > Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libaprutil-1.so.0()(64bit) > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: system-logos >= 7.92.1-1 > Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils-python > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libaprutil-1.so.0()(64bit) > Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils > Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) > Requires: scl-utils > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libjansson.so.4()(64bit) > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libapr-1.so.0()(64bit) > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils > Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: libapr-1.so.0()(64bit) > Error: Package: rh-python36-runtime-2.0-1.el7.x86_64 (ussuri-wsgi) > Requires: scl-utils > Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils-python > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: /etc/mime.types > Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) > Requires: policycoreutils-python > > Please take a look into this. > > Thanks > > Sau! 
> > > > [0] > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_base_image-238.log.html > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From build.starlingx at gmail.com Sat Jun 13 01:32:02 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 12 Jun 2020 21:32:02 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 736 - Failure! Message-ID: <1573226355.1655.1592011923724.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 736 Status: Failure Timestamp: 20200612T234628Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200612T233238Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200612T233238Z DOCKER_BUILD_ID: jenkins-ussuri-20200612T233238Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200612T233238Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200612T233238Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri From build.starlingx at gmail.com Sat Jun 13 01:32:05 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 12 Jun 2020 21:32:05 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 4 - Still Failing! In-Reply-To: <255151149.1649.1591948992013.JavaMail.javamailuser@localhost> References: <255151149.1649.1591948992013.JavaMail.javamailuser@localhost> Message-ID: <663713604.1658.1592011925692.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 4 Status: Still Failing Timestamp: 20200612T233238Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200612T233238Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From sgw at linux.intel.com Sat Jun 13 03:01:01 2020 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 12 Jun 2020 20:01:01 -0700 Subject: [Starlingx-discuss] Ussuri Test build failed In-Reply-To: References: Message-ID: <2d51394b-f6d6-ae41-a7e5-ebb5ed5e7117@linux.intel.com> Zhipeng: Seems like Soctt might have re-fired the build, it's still failing, please take a look. http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200612T233238Z/logs Sau! On 6/12/20 5:18 PM, Liu, ZhipengS wrote: > Hi Scott, > > From log, we are still using > https://raw.githubusercontent.com/openstack/requirements/stable/train/upper-constraints.txt > not > https://raw.githubusercontent.com/openstack/requirements/stable/ussuri/upper-constraints.txt > > You might not cherry pick below patch normally. > https://review.opendev.org/#/c/712880 > > Please make sure cherry pick below 2 patches before start building openstack images. > https://review.opendev.org/712862 > https://review.opendev.org/712880 > > Thanks! 
> Zhipeng > > -----Original Message----- > From: Saul Wold > Sent: 2020年6月13日 0:17 > To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS > Subject: Re: [Starlingx-discuss] Ussuri Test build failed > > > Turns out that Scott might have found that and fixed it in a second build that happened yesterday afternoon, but the Failure notification does not seem to have been sent. > > The failed build logs [0] still seem to show a variety of missing dependencies. > > There might also be another merge conflict. > > > There were 10 failures: > stx-cinder> ERROR: Could not find a version that satisfies the > requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) >> ERROR: No matching distribution found for >> google-api-python-client===1.7.11 (from -c >> /tmp/wheels/upper-constraints.txt (line 303)) > stx-fm-rest-api >> ERROR: Could not find a version that satisfies the requirement >> pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) >> (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, >> 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, >> 1.2, 1.2.1, 1.3.1, 1.3.2) >> ERROR: No matching distribution found for pecan===1.3.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 21))stx-glance >> ERROR: Could not find a version that satisfies the requirement >> networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) >> (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, >> 2.4rc2, 2.4) >> ERROR: No matching distribution found for networkx===2.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 101))stx-gnocchi >> ERROR: Could not find a version that satisfies the requirement >> uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line >> 705)) (from versions: none) >> ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c >> /tmp/wheels/upper-constraints.txt (line 705)) > stx-heat >> ERROR: Could not find a version that satisfies the requirement >> networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) >> (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, >> 2.4rc2, 2.4) >> ERROR: No matching distribution found for networkx===2.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 101))stx-keystone-api-proxy >> ERROR: Could not find a version that satisfies the requirement >> pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) >> (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, >> 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, >> 1.2, 1.2.1, 1.3.1, 1.3.2) >> ERROR: No matching distribution found for pecan===1.3.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 21)) > stx-nova >> ERROR: Could not find a version that satisfies the requirement >> networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) >> (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, >> 2.4rc2, 2.4) >> ERROR: No matching distribution found for networkx===2.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 101)) > stx-nova-api-proxy >> ERROR: Could not find a version that satisfies the requirement >> repoze.lru===0.7 (from -c /tmp/wheels/upper-constraints.txt (line >> 571)) (from versions: none) >> 
ERROR: No matching distribution found for repoze.lru===0.7 (from -c >> /tmp/wheels/upper-constraints.txt (line 571)) > stx-openstackclients >> ERROR: Could not find a version that satisfies the requirement >> prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line >> 159)) (from versions: none) >> ERROR: No matching distribution found for prettytable===0.7.2 (from -c >> /tmp/wheels/upper-constraints.txt (line 159)) > stx-platformclients> ERROR: Could not find a version that satisfies the > requirement prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line 159)) (from versions: none) >> ERROR: No matching distribution found for prettytable===0.7.2 (from -c >> /tmp/wheels/upper-constraints.txt (line 159)) > > Sau! > > [0] > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_flock_images-200.log.html > > > > On 6/11/20 9:05 PM, Liu, ZhipengS wrote: >> Hi Scott, >> >> Root cause found! >> Please help double check your cengn script. >> In the log, I saw you added 4 repos exactly. >> But build-stx-bash.sh run just with 2 repos as you see in below log. >> I guess you need change "EXTRA_ARGS=" to "EXTRA_ARGS+=" >> >> + EXTRA_ARGS=' >> --repo stx-local-build,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs/RPMS/std --repo stx-mirror-distro,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs/RPMS ' >> + '[' ussuri-stable-latest == ussuri-stable-latest ']' >> + EXTRA_ARGS=' >> --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ >> ' >> + /localdisk/designer/jenkins/ussuri/cgcs-root/build-tools/build-docker-images/build-stx-base.sh --os centos --os-version 7.5.1804 --stream stable --version ussuri-stable --user starlingx --registry docker.io --attempts 5 --push --latest --latest-tag=ussuri-stable-latest --clean --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Saul Wold >> Sent: 2020年6月12日 1:54 >> To: starlingx-discuss at lists.starlingx.io; Hu, Yong ; Liu, ZhipengS >> Subject: Ussuri Test build failed >> >> >> Zhipeng, >> >> Looks like there is a missing dependency issue with the Ussuri build see the logs [0]. 
>> >> Summary of Errors: >> >> Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: libaprutil-1.so.0()(64bit) >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: system-logos >= 7.92.1-1 >> Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: policycoreutils-python >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: libaprutil-1.so.0()(64bit) >> Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: policycoreutils >> Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) >> Requires: scl-utils >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: libjansson.so.4()(64bit) >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: libapr-1.so.0()(64bit) >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: policycoreutils >> Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: libapr-1.so.0()(64bit) >> Error: Package: rh-python36-runtime-2.0-1.el7.x86_64 (ussuri-wsgi) >> Requires: scl-utils >> Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) >> Requires: policycoreutils-python >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: /etc/mime.types >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: policycoreutils-python >> >> Please take a look into this. >> >> Thanks >> >> Sau! >> >> >> >> [0] >> http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_base_image-238.log.html >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Ghada.Khalil at windriver.com Sat Jun 13 03:11:40 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Sat, 13 Jun 2020 03:11:40 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - June 11/2020 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases stx.4.0 - StarlingX has been confirmed as an openstack project. Yay! Congratulations everyone! - Feature Status: https://docs.google.com/spreadsheets/d/1a93wt0XO0_JvajnPzQwnqFkXfdDysKVnHpbrEc17_yg/edit#gid=1107209846 - Feature Checkpoints: - Openstack Rebase to Ussuri - Test build failed, appears to be a package dependency issue - Need to have this resolved before allowing the code to be merged - Need to wait for the MS3 declaration until - Longer Term Discussion: How do we make rebasing to the openstack release easier and more predictable? - Some ideas: - Increase our LAG after openstack by more than 6wks - Track master right away as opposed to wait for an integration when the openstack release is available - this was discussed before. why didn't we proceed with this at the beginning of stx.4.0? stability concerns? 
- May want to do that for the containerized services, but keep the platform pieces on a stable release - Get input from the Ussuri team on their suggestions on how to make this more predictable - Send thread to the mailing list and an etherpad to solicit input from the community members - Action: Ghada before next community meeting - June - Agreed to have a recommendation before setting the milestone dates for stx.5.0 - Kubernetes Component Upversion - helm v3 - Merging currently; should be in by EOW - B&R - Dev Complete. Only bugs remain; tracked under LPs. - System Upgrades - Dev Complete. All foundational code is in. Fixing bugs based on testing. - Flock Versioning - Agreed to defer to stx.5.0; will target to do early in the next release From build.starlingx at gmail.com Sat Jun 13 03:50:07 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 12 Jun 2020 23:50:07 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 423 - Failure! Message-ID: <605977258.1662.1592020208174.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 423 Status: Failure Timestamp: 20200613T023151Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200613T022025Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20200613T022025Z DOCKER_BUILD_ID: jenkins-master-distro-20200613T022025Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200613T022025Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20200613T022025Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Sat Jun 13 03:50:10 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 12 Jun 2020 23:50:10 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_distro_master_master - Build # 149 - Still Failing! In-Reply-To: <472469983.1645.1591930912667.JavaMail.javamailuser@localhost> References: <472469983.1645.1591930912667.JavaMail.javamailuser@localhost> Message-ID: <1597043133.1665.1592020210614.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 149 Status: Still Failing Timestamp: 20200613T022025Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200613T022025Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Sat Jun 13 14:00:19 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 13 Jun 2020 10:00:19 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 738 - Failure! 
Message-ID: <1146638409.1669.1592056819840.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 738 Status: Failure Timestamp: 20200613T121230Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200613T120010Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200613T120010Z DOCKER_BUILD_ID: jenkins-ussuri-20200613T120010Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200613T120010Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200613T120010Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri From build.starlingx at gmail.com Sat Jun 13 14:00:21 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 13 Jun 2020 10:00:21 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 5 - Still Failing! In-Reply-To: <1684325166.1656.1592011924182.JavaMail.javamailuser@localhost> References: <1684325166.1656.1592011924182.JavaMail.javamailuser@localhost> Message-ID: <941495673.1672.1592056822253.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 5 Status: Still Failing Timestamp: 20200613T120010Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200613T120010Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From vnish11 at gmail.com Sat Jun 13 18:20:08 2020 From: vnish11 at gmail.com (Nishant Verma) Date: Sat, 13 Jun 2020 14:20:08 -0400 Subject: [Starlingx-discuss] hands on Message-ID: Hi All, >From where I can watch the recording of starlingX hands-on workshop held at OS Summit. https://www.openstack.org/summit/denver-2019/summit-schedule/events/23630/starlingx-hands-on-workshop I am not able to find the link for the same. Please share the link or file if someone has access to it. Thanks in advance. -- Rgds, Nishant -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sun Jun 14 03:01:30 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 13 Jun 2020 23:01:30 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 425 - Failure! 
Message-ID: <568970242.1676.1592103690727.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 425 Status: Failure Timestamp: 20200614T014348Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200614T013202Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20200614T013202Z DOCKER_BUILD_ID: jenkins-master-distro-20200614T013202Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200614T013202Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20200614T013202Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Sun Jun 14 03:01:32 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 13 Jun 2020 23:01:32 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_distro_master_master - Build # 150 - Still Failing! In-Reply-To: <1607218384.1663.1592020208846.JavaMail.javamailuser@localhost> References: <1607218384.1663.1592020208846.JavaMail.javamailuser@localhost> Message-ID: <2129431324.1679.1592103692799.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 150 Status: Still Failing Timestamp: 20200614T013202Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200614T013202Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From ildiko.vancsa at gmail.com Sun Jun 14 07:34:37 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 14 Jun 2020 09:34:37 +0200 Subject: [Starlingx-discuss] hands on In-Reply-To: References: Message-ID: <7E777D7A-C0D7-4EBF-B945-86E3288EDD76@gmail.com> Hi Nishant, Unfortunately we don’t record the hands-on workshops. The courses are self paced and the mentors in the room are helping the attendees individually with their questions during each step. Due to this format the recording would not have value after the event. You can find some pointers to introductory materials here: https://www.starlingx.io/learn/ You can find links here to the ISO images as well as documentation to deploy the platform to test: https://www.starlingx.io/software/ Is there any particular feature of StarlingX you would be interested in or you would like to deploy and evaluate the platform? Thanks, Ildikó > On Jun 13, 2020, at 20:20, Nishant Verma wrote: > > Hi All, > > From where I can watch the recording of starlingX hands-on workshop held at OS Summit. > https://www.openstack.org/summit/denver-2019/summit-schedule/events/23630/starlingx-hands-on-workshop > > I am not able to find the link for the same. Please share the link or file if someone has access to it. > > Thanks in advance. 
> > > -- > Rgds, > Nishant > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From austin.sun at intel.com Sun Jun 14 12:29:45 2020 From: austin.sun at intel.com (Sun, Austin) Date: Sun, 14 Jun 2020 12:29:45 +0000 Subject: [Starlingx-discuss] hands on In-Reply-To: <7E777D7A-C0D7-4EBF-B945-86E3288EDD76@gmail.com> References: <7E777D7A-C0D7-4EBF-B945-86E3288EDD76@gmail.com> Message-ID: Hi Nishant: I think that hands-on are based on stx.2.0 . the bootstrap , deploy has some changed since stx.3.0 There is other stx 3.0 training materials [1] shared by Frank before. Not hands-on , but a lots of details architecture and stx flock services introduce. Hope this is useful . [1] https://drive.google.com/drive/folders/1AvUCq3ojuhNZV6XE8YdRhp9PVxixRIeE Thanks. BR Austin Sun. -----Original Message----- From: Ildiko Vancsa Sent: Sunday, June 14, 2020 3:35 PM To: Nishant Verma Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] hands on Hi Nishant, Unfortunately we don’t record the hands-on workshops. The courses are self paced and the mentors in the room are helping the attendees individually with their questions during each step. Due to this format the recording would not have value after the event. You can find some pointers to introductory materials here: https://www.starlingx.io/learn/ You can find links here to the ISO images as well as documentation to deploy the platform to test: https://www.starlingx.io/software/ Is there any particular feature of StarlingX you would be interested in or you would like to deploy and evaluate the platform? Thanks, Ildikó > On Jun 13, 2020, at 20:20, Nishant Verma wrote: > > Hi All, > > From where I can watch the recording of starlingX hands-on workshop held at OS Summit. > https://www.openstack.org/summit/denver-2019/summit-schedule/events/23630/starlingx-hands-on-workshop > > I am not able to find the link for the same. Please share the link or file if someone has access to it. > > Thanks in advance. > > > -- > Rgds, > Nishant > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Sun Jun 14 13:08:03 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Sun, 14 Jun 2020 13:08:03 +0000 Subject: [Starlingx-discuss] Ussuri Test build failed In-Reply-To: <2d51394b-f6d6-ae41-a7e5-ebb5ed5e7117@linux.intel.com> References: <2d51394b-f6d6-ae41-a7e5-ebb5ed5e7117@linux.intel.com> Message-ID: Hi Saul and Scott, Build error was caused by rebasing my patch after one patch was merged one day before. https://review.opendev.org/#/c/720135/ Upgrade openstack-helm-infra As https://review.opendev.org/#/c/719974/ merged, it causes a conflict. @Jim Gauld 0013-Update-ingress-chart-for-Helm-v3.patch introduced in this patch is no need anymore as the rebased openstack-helm-infra by my patch already includes this change. I have updated my patch below. https://review.opendev.org/#/c/720135/ Thanks! 
Zhipeng -----Original Message----- From: Saul Wold Sent: 2020年6月13日 11:01 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Scott Little Subject: Re: [Starlingx-discuss] Ussuri Test build failed Zhipeng: Seems like Soctt might have re-fired the build, it's still failing, please take a look. http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200612T233238Z/logs Sau! On 6/12/20 5:18 PM, Liu, ZhipengS wrote: > Hi Scott, > > From log, we are still using > https://raw.githubusercontent.com/openstack/requirements/stable/train/ > upper-constraints.txt > not > https://raw.githubusercontent.com/openstack/requirements/stable/ussuri > /upper-constraints.txt > > You might not cherry pick below patch normally. > https://review.opendev.org/#/c/712880 > > Please make sure cherry pick below 2 patches before start building openstack images. > https://review.opendev.org/712862 > https://review.opendev.org/712880 > > Thanks! > Zhipeng > > -----Original Message----- > From: Saul Wold > Sent: 2020年6月13日 0:17 > To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS > > Subject: Re: [Starlingx-discuss] Ussuri Test build failed > > > Turns out that Scott might have found that and fixed it in a second build that happened yesterday afternoon, but the Failure notification does not seem to have been sent. > > The failed build logs [0] still seem to show a variety of missing dependencies. > > There might also be another merge conflict. > > > There were 10 failures: > stx-cinder> ERROR: Could not find a version that satisfies the > requirement google-api-python-client===1.7.11 (from -c > /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, > 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, > 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, > 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, > 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) >> ERROR: No matching distribution found for >> google-api-python-client===1.7.11 (from -c >> /tmp/wheels/upper-constraints.txt (line 303)) > stx-fm-rest-api >> ERROR: Could not find a version that satisfies the requirement >> pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) >> (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, >> 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, >> 1.2, 1.2.1, 1.3.1, 1.3.2) >> ERROR: No matching distribution found for pecan===1.3.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 21))stx-glance >> ERROR: Could not find a version that satisfies the requirement >> networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) >> (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, >> 2.4rc2, 2.4) >> ERROR: No matching distribution found for networkx===2.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 101))stx-gnocchi >> ERROR: Could not find a version that satisfies the requirement >> uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line >> 705)) (from versions: none) >> ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c >> /tmp/wheels/upper-constraints.txt (line 705)) > stx-heat >> ERROR: Could not find a version that satisfies the requirement >> networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) >> (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, >> 2.4rc2, 2.4) >> ERROR: No matching distribution found for networkx===2.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 101))stx-keystone-api-proxy 
>> ERROR: Could not find a version that satisfies the requirement >> pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) >> (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, >> 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0, 1.1.1, 1.1.2, >> 1.2, 1.2.1, 1.3.1, 1.3.2) >> ERROR: No matching distribution found for pecan===1.3.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 21)) > stx-nova >> ERROR: Could not find a version that satisfies the requirement >> networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) >> (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, >> 2.4rc2, 2.4) >> ERROR: No matching distribution found for networkx===2.3 (from -c >> /tmp/wheels/upper-constraints.txt (line 101)) > stx-nova-api-proxy >> ERROR: Could not find a version that satisfies the requirement >> repoze.lru===0.7 (from -c /tmp/wheels/upper-constraints.txt (line >> 571)) (from versions: none) >> ERROR: No matching distribution found for repoze.lru===0.7 (from -c >> /tmp/wheels/upper-constraints.txt (line 571)) > stx-openstackclients >> ERROR: Could not find a version that satisfies the requirement >> prettytable===0.7.2 (from -c /tmp/wheels/upper-constraints.txt (line >> 159)) (from versions: none) >> ERROR: No matching distribution found for prettytable===0.7.2 (from >> -c /tmp/wheels/upper-constraints.txt (line 159)) > stx-platformclients> ERROR: Could not find a version that satisfies > stx-platformclients> the > requirement prettytable===0.7.2 (from -c > /tmp/wheels/upper-constraints.txt (line 159)) (from versions: none) >> ERROR: No matching distribution found for prettytable===0.7.2 (from >> -c /tmp/wheels/upper-constraints.txt (line 159)) > > Sau! > > [0] > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monoli > thic/20200611T142734Z/logs/jenkins-STX_build_docker_flock_images-200.l > og.html > > > > On 6/11/20 9:05 PM, Liu, ZhipengS wrote: >> Hi Scott, >> >> Root cause found! >> Please help double check your cengn script. >> In the log, I saw you added 4 repos exactly. >> But build-stx-bash.sh run just with 2 repos as you see in below log. >> I guess you need change "EXTRA_ARGS=" to "EXTRA_ARGS+=" >> >> + EXTRA_ARGS=' >> --repo stx-local-build,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs/RPMS/std --repo stx-mirror-distro,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs/RPMS ' >> + '[' ussuri-stable-latest == ussuri-stable-latest ']' >> + EXTRA_ARGS=' >> --repo >> ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download >> .ceph.com/rpm-mimic/el7/x86_64/ --repo >> ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/m >> irror.centos.org/centos/7/sclo/x86_64/rh/ >> ' >> + /localdisk/designer/jenkins/ussuri/cgcs-root/build-tools/build-dock >> + er-images/build-stx-base.sh --os centos --os-version 7.5.1804 >> + --stream stable --version ussuri-stable --user starlingx --registry >> + docker.io --attempts 5 --push --latest >> + --latest-tag=ussuri-stable-latest --clean --repo >> + ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/downlo >> + ad.ceph.com/rpm-mimic/el7/x86_64/ --repo >> + ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos >> + /mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> Thanks! 
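The one-character change being suggested above is the difference between replacing and extending the variable. A minimal sketch, with illustrative variable names rather than the actual CENGN job script:

    # '=' discards the repos collected so far; '+=' keeps them and appends
    EXTRA_ARGS="--repo stx-local-build,${STD_RPMS_URL} --repo stx-mirror-distro,${MIRROR_RPMS_URL}"
    if [ "${LATEST_TAG}" == "ussuri-stable-latest" ]; then
        EXTRA_ARGS+=" --repo ussuri-ceph,${CEPH_REPO_URL} --repo ussuri-wsgi,${WSGI_REPO_URL}"
    fi
    # all four --repo arguments now reach the script instead of only the last two
    build-stx-base.sh --os centos --stream stable ${EXTRA_ARGS}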
>> Zhipeng >> >> -----Original Message----- >> From: Saul Wold >> Sent: 2020年6月12日 1:54 >> To: starlingx-discuss at lists.starlingx.io; Hu, Yong >> ; Liu, ZhipengS >> Subject: Ussuri Test build failed >> >> >> Zhipeng, >> >> Looks like there is a missing dependency issue with the Ussuri build see the logs [0]. >> >> Summary of Errors: >> >> Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 >> (ussuri-wsgi) >> Requires: libaprutil-1.so.0()(64bit) >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: system-logos >= 7.92.1-1 >> Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 >> (ussuri-wsgi) >> Requires: policycoreutils-python >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: libaprutil-1.so.0()(64bit) >> Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 >> (ussuri-wsgi) >> Requires: policycoreutils >> Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) >> Requires: scl-utils >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: libjansson.so.4()(64bit) >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: libapr-1.so.0()(64bit) >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: policycoreutils >> Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 >> (ussuri-wsgi) >> Requires: libapr-1.so.0()(64bit) >> Error: Package: rh-python36-runtime-2.0-1.el7.x86_64 (ussuri-wsgi) >> Requires: scl-utils >> Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) >> Requires: policycoreutils-python >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: /etc/mime.types >> Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) >> Requires: policycoreutils-python >> >> Please take a look into this. >> >> Thanks >> >> Sau! >> >> >> >> [0] >> http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monol >> ithic/20200611T142734Z/logs/jenkins-STX_build_docker_base_image-238.l >> og.html _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From build.starlingx at gmail.com Sun Jun 14 13:59:28 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 14 Jun 2020 09:59:28 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 740 - Failure! 
Message-ID: <2144104949.1683.1592143169042.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 740 Status: Failure Timestamp: 20200614T121216Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200614T120011Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200614T120011Z DOCKER_BUILD_ID: jenkins-ussuri-20200614T120011Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200614T120011Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200614T120011Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri From build.starlingx at gmail.com Sun Jun 14 13:59:30 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 14 Jun 2020 09:59:30 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 6 - Still Failing! In-Reply-To: <955001007.1670.1592056820456.JavaMail.javamailuser@localhost> References: <955001007.1670.1592056820456.JavaMail.javamailuser@localhost> Message-ID: <540472242.1686.1592143171090.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 6 Status: Still Failing Timestamp: 20200614T120011Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200614T120011Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From alfredo.deluca at gmail.com Sun Jun 14 15:50:27 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Sun, 14 Jun 2020 17:50:27 +0200 Subject: [Starlingx-discuss] Subcloud on a Virtual Machine In-Reply-To: References: Message-ID: Thanks Barton. appreciate it. If we will come up with something that works I ll let you know and maybe post the results. Cheers On Wed, Jun 10, 2020 at 5:09 PM Wensley, Barton < Barton.Wensley at windriver.com> wrote: > Alfredo, > > > > We support installing StarlingX in VMs using either KVM or VirtualBox – > see the instructions at > https://docs.starlingx.io/deploy_install_guides/index.html. > > > > We don’t have instructions for installing StarlingX in OpenStack VMs. To > do this you would likely want to generate a qcow2 image (using KVM or > VirtualBox). I can’t help you with this and based on the lack of response > on the list I don’t think others have done this either. If you figure this > out it would be great if you could share your findings with the community. > > > > Bart > > > > *From:* Alfredo De Luca > *Sent:* June 8, 2020 6:00 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Subcloud on a Virtual Machine > > > > Hi all. > > Any thoughts on this? Also has anyone ever tried this solution with > StarlingX on Virtual Machine at all? > > > > Cheers > > > > > > On Wed, Jun 3, 2020 at 9:05 PM Alfredo De Luca > wrote: > > Hi all. > > For testing purposes we are trying to install a subcloud on a VM > (Openstack to be precise) but we get a couple of errors as below. Booting > from an ISO (STX 3.0) we get this > > > > 1. ERROR: Specified installation (sda) or boot (sda) device is invalid. > > then I supposed the ISO is looking for a device *sda* .. 
so we fixed that > but then another issue occurred and the error now is > > 2. Disk "" given in clearpart command does not exist. > > Now I wonder if it is possible to install that on top of a VM and also > what could it the fix for the second error. > > Any idea/clue? > > > > Cheers > > > > > > -- > > */Alfredo* > > > > > > > -- > > */Alfredo* > > > -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Mon Jun 15 14:01:26 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 15 Jun 2020 10:01:26 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 743 - Failure! Message-ID: <1992368415.1694.1592229687933.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 743 Status: Failure Timestamp: 20200615T121335Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200615T120015Z DOCKER_BUILD_ID: jenkins-ussuri-20200615T120015Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri From build.starlingx at gmail.com Mon Jun 15 14:01:29 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 15 Jun 2020 10:01:29 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing! In-Reply-To: <2043706076.1684.1592143169497.JavaMail.javamailuser@localhost> References: <2043706076.1684.1592143169497.JavaMail.javamailuser@localhost> Message-ID: <740699650.1697.1592229690417.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 7 Status: Still Failing Timestamp: 20200615T120015Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From zhipengs.liu at intel.com Mon Jun 15 15:03:26 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 15 Jun 2020 15:03:26 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing! In-Reply-To: <740699650.1697.1592229690417.JavaMail.javamailuser@localhost> References: <2043706076.1684.1592143169497.JavaMail.javamailuser@localhost> <740699650.1697.1592229690417.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, Could you help retrigger the build again? I have updated my patch with the comment in my another email. Thanks! Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月15日 22:01 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing! 
Project: STX_build_master_ussuri Build #: 7 Status: Still Failing Timestamp: 20200615T120015Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From sgw at linux.intel.com Mon Jun 15 15:14:02 2020 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 15 Jun 2020 08:14:02 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing! In-Reply-To: References: <2043706076.1684.1592143169497.JavaMail.javamailuser@localhost> <740699650.1697.1592229690417.JavaMail.javamailuser@localhost> Message-ID: <46bcbc3b-bbe4-f805-369f-465884266cf8@linux.intel.com> On 6/15/20 8:03 AM, Liu, ZhipengS wrote: > Hi Scott, > > Could you help retrigger the build again? > I have updated my patch with the comment in my another email. > Does he have to re-merge the patch before the trigger? Sau! > Thanks! > Zhipeng > > -----Original Message----- > From: build.starlingx at gmail.com > Sent: 2020年6月15日 22:01 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing! > > Project: STX_build_master_ussuri > Build #: 7 > Status: Still Failing > Timestamp: 20200615T120015Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From zhipengs.liu at intel.com Mon Jun 15 15:39:45 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 15 Jun 2020 15:39:45 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing! In-Reply-To: <46bcbc3b-bbe4-f805-369f-465884266cf8@linux.intel.com> References: <2043706076.1684.1592143169497.JavaMail.javamailuser@localhost> <740699650.1697.1592229690417.JavaMail.javamailuser@localhost> <46bcbc3b-bbe4-f805-369f-465884266cf8@linux.intel.com> Message-ID: From the latest log, I saw mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs/std/failed-packages/openstack-helm-infra-1.0-16.tis/ In my latest update that I submitted yesterday, it should be changed to 1.0-17. So, it seems not fetching the latest update. https://review.opendev.org/720135 Zhipeng -----Original Message----- From: Saul Wold Sent: 2020年6月15日 23:14 To: starlingx-discuss at lists.starlingx.io; Scott Little ; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing! On 6/15/20 8:03 AM, Liu, ZhipengS wrote: > Hi Scott, > > Could you help retrigger the build again? > I have updated my patch with the comment in my another email. > Does he have to re-merge the patch before the trigger? Sau! > Thanks! > Zhipeng > > -----Original Message----- > From: build.starlingx at gmail.com > Sent: 2020年6月15日 22:01 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing! 
> > Project: STX_build_master_ussuri > Build #: 7 > Status: Still Failing > Timestamp: 20200615T120015Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From zhipengs.liu at intel.com Mon Jun 15 15:48:39 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 15 Jun 2020 15:48:39 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Hi Chris, Thanks a lot for your comments to our ussuri upgrade patches though it comes a little late. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Last Friday, I have replied to your comments one by one and updated related commit messages according to your proposal. Just want to know if you still have further concern on these patches. As you know our openstack upgrade task is in the final mile for STX 4.0, I’d like to work closely with you to push them get merged this week. Thanks!! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月11日 9:43 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. 
>>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. >>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! 
>>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. 
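For reference, the recovery scenario being discussed boils down to a short command sequence, which also makes it a natural candidate for the automated coverage mentioned above. A rough sketch on an AIO-DX lab, with controller names and ordering assumed:

    system host-lock controller-1            # wait until it reports locked/disabled/online
    system host-unlock controller-1
    ssh controller-1 'sudo reboot -f'        # reset the standby controller
    sudo reboot -f                           # reset the active controller
    system application-apply stx-openstack   # reapply once both controllers are unlocked/available
    openstack server list                    # spot check that openstack still answers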
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. 
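When reproducing this, the recovery (or lack of it) is easiest to follow by watching the pods directly. A minimal sketch, assuming the stock openstack namespace and the usual openstack-helm labels and pod names:

    kubectl -n openstack get pods -l application=mariadb -o wide -w    # watch the galera members come back
    kubectl -n openstack get pods | grep -Ev 'Running|Completed'       # anything left in CrashLoopBackOff
    kubectl -n openstack logs mariadb-server-0 | grep -i wsrep         # galera cluster state messages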
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! 
>>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! 
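For the two pods quoted earlier in this thread that stay not ready (openvswitch-db and the ovs agent that depends on it), the probe and mount warnings can be read straight from the pod events, and the socket the probe complains about can be checked in place. A small sketch, reusing the pod name from the log above:

    kubectl -n openstack describe pod openvswitch-db-8fxkw | sed -n '/Events:/,$p'       # FailedMount / Unhealthy events
    kubectl -n openstack exec openvswitch-db-8fxkw -- ls -l /var/run/openvswitch/db.sock  # the socket the probe cannot open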
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. 
>>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. 
[OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. 
From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  
Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>> s >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From alexandru.dimofte at intel.com Mon Jun 15 20:14:33 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Mon, 15 
Jun 2020 20:14:33 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200614T171307Z
Message-ID: 

Sanity Test from 2020-June-14 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200614T171307Z/outputs/iso/ )

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200614T171307Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
Transportation Solutions Division
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 10911 bytes
Desc: image001.png
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.png
Type: image/png
Size: 20507 bytes
Desc: image002.png
URL: 

From zhipengs.liu at intel.com Tue Jun 16 00:18:03 2020
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Tue, 16 Jun 2020 00:18:03 +0000
Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing!
In-Reply-To: 
References: <2043706076.1684.1592143169497.JavaMail.javamailuser@localhost> <740699650.1697.1592229690417.JavaMail.javamailuser@localhost>
Message-ID: 

Hi Scott,

As I mentioned in IRC, besides https://review.opendev.org/starlingx/openstack-armada-app refs/changes/35/720135/12 the dependent patch below also needs to be updated before triggering the build again:
https://review.opendev.org/starlingx/openstack-armada-app refs/changes/61/731461/10
(A sketch for pulling these refs into a local checkout follows at the end of this message.)

Thanks!
Zhipeng

From: Liu, ZhipengS
Sent: 2020年6月15日 23:03
To: Scott Little ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing!

Hi Scott,

Could you help retrigger the build again? I have updated my patch with the comment in my other email.

Thanks!
Zhipeng
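For reference, the two patch-set refs mentioned at the top of this mail can be pulled into a local checkout with the usual Gerrit fetch flow. A minimal sketch, assuming a local clone named openstack-armada-app and access to review.opendev.org:

    # assuming a local clone of starlingx/openstack-armada-app
    cd openstack-armada-app
    # fetch the exact patch set and apply it on top of the local branch
    git fetch https://review.opendev.org/starlingx/openstack-armada-app refs/changes/61/731461/10
    git cherry-pick FETCH_HEAD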
-----Original Message-----
From: build.starlingx at gmail.com
Sent: 2020年6月15日 22:01
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 7 - Still Failing!

Project: STX_build_master_ussuri
Build #: 7
Status: Still Failing
Timestamp: 20200615T120015Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200615T120015Z/logs
--------------------------------------------------------------------------------
Parameters
BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false
FORCE_BUILD: true

From yong.hu at intel.com Tue Jun 16 05:45:40 2020
From: yong.hu at intel.com (Hu, Yong)
Date: Tue, 16 Jun 2020 05:45:40 +0000
Subject: [Starlingx-discuss] StarlingX Distro-OpenStack: Bi-weekly Project Meeting
Message-ID: 

Folks,
Here is the agenda for today:
- Overall STX status and 4.0 release progress
- OpenStack “U” upgrade: patch status and test build status - Zhipeng from Intel and Chant from 99Cloud.
- LaunchPad review

Regards,
Yong

From: yong.hu at intel.com
When: 9:00 PM - 9:30 PM June 16, 2020
Subject: StarlingX Distro-OpenStack: Bi-weekly Project Meeting
Location: https://zoom.us/j/342730236

Hi folks,
This is a new series of bi-weekly project meetings on StarlingX Distro-OpenStack. Your participation in this meeting and/or other offline contributions by any means are highly appreciated!
Join the meeting: https://zoom.us/j/342730236
Project Team Etherpad: https://etherpad.openstack.org/p/stx-distro-openstack-meetings
PS: from now until early next summer, we are going to keep this time slot to accommodate US standard time (6:00 AM in the morning).

regards,
Yong Hu

From austin.sun at intel.com Tue Jun 16 07:25:50 2020
From: austin.sun at intel.com (Sun, Austin)
Date: Tue, 16 Jun 2020 07:25:50 +0000
Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/17/2020
Message-ID: 

Hi All:
Agenda for the 6/17 meeting:
- Auto version integ/kernel repo:
  https://storyboard.openstack.org/#!/story/2007750
  https://review.opendev.org/#/q/topic:pkg-versioning+(status:open+OR+status:merged)
- ceph containerization
- centos8 and python3
- bugs: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
  H: https://bugs.launchpad.net/starlingx/+bug/1882172
- open: if you have any other topics, feel free to add them to https://etherpad.openstack.org/p/stx-distro-other

Thanks.
BR
Austin Sun.

From ildiko.vancsa at gmail.com Tue Jun 16 10:07:45 2020
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Tue, 16 Jun 2020 12:07:45 +0200
Subject: [Starlingx-discuss] 2nd OSF Edge Computing Group white paper is now live!
Message-ID: <76847E3C-CBE6-4D95-828C-31330D2C98E2@gmail.com>

Hi,

I’m reaching out to draw your attention to the 2nd white paper[1] of the Edge Computing Group, which was published[2] yesterday, and to ask for your help in spreading the word about it. I would also like to thank all the contributors who participated in putting together and editing the content.

Happy reading! :)
Thanks and Best Regards,
Ildikó

[1] https://www.openstack.org/edge-computing/edge-computing-next-steps-in-architecture-design-and-testing
[2] https://www.openstack.org/news/view/455/edge-computing-use-cases-gain-momentum-in-open-infrastructure-communityopenstack-foundation

From Volker.Hoesslin at swsn.de Tue Jun 16 10:44:49 2020
From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker)
Date: Tue, 16 Jun 2020 10:44:49 +0000
Subject: [Starlingx-discuss] Unable to log in controller-1 after changing password on active controller-0
In-Reply-To: 
References: 
Message-ID: 

Try to re-install controller-1?

From: Shashi, Yatindra [yatindra.shashi at intel.com]
Sent: Friday, 12 June 2020 12:13
To: starlingx-discuss at lists.starlingx.io
Subject: [URL was modified] [Starlingx-discuss] Unable to log in controller-1 after changing password on active controller-0

External e-mail! Only open links or attachments from trusted senders!

Hi All,

In an AIO-Duplex setup (STX 3.0): since StarlingX forces the user to change the password after a certain number of days, I changed the password on controller-0, but I did not do so on controller-1. I locked/unlocked controller-1 and tried to log in with both the old and the new password, but access is denied. Is there a way to reset or change the sysadmin password on controller-1? I am able to log in to the dashboard and controller-0 with the password I had.

Mit freundlichen Grüßen/ with best regards,
Yatindra Shashi
IoTG DE- Intel Corporation
Munich, Germany

Save Paper, Go Digital :)

Intel Deutschland GmbH
Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany
Tel: +49 89 99 8853-0, https://sis-schwerin.de/externer-link/?href=www.intel.de
Managing Directors: Christin Eisenschmid, Gary Kershaw
Chairperson of the Supervisory Board: Nicole Lau
Registered Office: Munich
Commercial Register: Amtsgericht Muenchen HRB 186928

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From build.starlingx at gmail.com Tue Jun 16 12:17:14 2020
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 16 Jun 2020 08:17:14 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 745 - Failure!
Message-ID: <1662070252.1701.1592309837580.JavaMail.javamailuser@localhost>

Project: STX_build_pre_installer
Build #: 745
Status: Failure
Timestamp: 20200616T121348Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T120013Z/logs
--------------------------------------------------------------------------------
Parameters
MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200616T120013Z
DOCKER_BUILD_ID: jenkins-ussuri-20200616T120013Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T120013Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200616T120013Z/logs
MASTER_JOB_NAME: STX_build_master_ussuri
MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri

From build.starlingx at gmail.com Tue Jun 16 12:17:19 2020
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 16 Jun 2020 08:17:19 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 8 - Still Failing!
In-Reply-To: <2043274140.1695.1592229688789.JavaMail.javamailuser@localhost>
References: <2043274140.1695.1592229688789.JavaMail.javamailuser@localhost>
Message-ID: <621575579.1704.1592309839848.JavaMail.javamailuser@localhost>

Project: STX_build_master_ussuri
Build #: 8
Status: Still Failing
Timestamp: 20200616T120013Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T120013Z/logs
--------------------------------------------------------------------------------
Parameters
BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false
FORCE_BUILD: true

From Ian.Jolliffe at windriver.com Tue Jun 16 15:09:32 2020
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Tue, 16 Jun 2020 15:09:32 +0000
Subject: [Starlingx-discuss] [TSC] TSC minutes - 6/10
Message-ID: 

Confirmation Board meeting
- Once again, congratulations to the whole community on achieving confirmation of the StarlingX project.
- Thank you to the OpenStack Foundation and Board for their support during the process from Pilot to Confirmation.

PTG outcomes and consolidated notes
- Ildiko to take a first pass [1]; others please review.
- We will turn that into a blog post.
- We agreed to restart Multi-OS; Saul to work on updating the governance repo.

[1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhipengs.liu at intel.com Tue Jun 16 17:06:13 2020
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Tue, 16 Jun 2020 17:06:13 +0000
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
In-Reply-To: 
References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com>
Message-ID: 

Hi Chris and Bob,

Could you help to review the Ussuri upgrade patches below again?
https://review.opendev.org/#/q/topic:for_ussuri+(status:open)
We need your great help to get them merged!

Thanks!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS
Sent: 2020年6月15日 23:49
To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Hi Chris,

Thanks a lot for your comments on our Ussuri upgrade patches, even though they came a little late.
https://review.opendev.org/#/q/topic:for_ussuri+(status:open)
Last Friday I replied to your comments one by one and updated the related commit messages according to your proposals.
Just want to know whether you still have any further concerns about these patches. As you know, our OpenStack upgrade task is in the final mile for STX 4.0, so I'd like to work closely with you to get them merged this week.

Thanks!!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS
Sent: 2020年6月11日 9:43
To: Scott Little ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Hi Scott,

I have fixed the merge conflicts now! If you have any concerns, please let me know.

Thanks!
Zhipeng

-----Original Message-----
From: Scott Little
Sent: 2020年6月11日 4:28
To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Six of the nine updates are in a state of merge conflict.

Please resolve the conflicts so that I can make progress with a CENGN build.

Scott

On 2020-06-10 9:20 a.m., Scott Little wrote:
> CENGN cycles aren't a problem.  People resources is a challenge.
> > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. >>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! 
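>>>
>>> For reference, a minimal sketch of how that build step might be invoked by hand once the two repos are added. The local-stx-build and stx-distro URLs stay elided above, so they appear here only as placeholder shell variables; the two added ussuri repos are copied verbatim from this thread.
>>>
>>>     # LOCAL_STX_BUILD_REPO and STX_DISTRO_REPO are placeholders for the
>>>     # existing repo URLs already used by the build job
>>>     build-stx-base.sh \
>>>         --repo local-stx-build,"${LOCAL_STX_BUILD_REPO}" \
>>>         --repo stx-distro,"${STX_DISTRO_REPO}" \
>>>         --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \
>>>         --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/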
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. 
>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. 
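>>>
>>> For anyone trying to reproduce this, a minimal sketch of that check (application, chart and namespace arguments as used elsewhere in this thread):
>>>
>>>     # confirm whether stx-openstack is currently applied or only uploaded
>>>     system application-list
>>>
>>>     # reported to work while the app is applied, and to fail only while
>>>     # the app is still in the uploaded state
>>>     system helm-override-show stx-openstack mariadb openstack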
>>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. 
(then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! 
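>>>
>>> For reference, one way to watch that recovery window after a forced reboot; a sketch only, where the pod names come from the events above and the openstack namespace is assumed:
>>>
>>>     # watch the two pods that stay not-ready the longest
>>>     kubectl -n openstack get pods -w | grep -E 'openvswitch-db|neutron-ovs-agent'
>>>
>>>     # dump the recent probe/mount events for the stuck pod
>>>     kubectl -n openstack describe pod openvswitch-db-8fxkw | tail -n 30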
>>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? 
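>>>
>>> As a side note, the same [mysqld] bind_address=:: override can in principle also be tried as a user-level helm override for a quick experiment. The sketch below assumes the usual stx-openstack helm-override workflow; the command form is from memory, so treat it as an assumption rather than the exact fix carried in the reviews above.
>>>
>>>     # hypothetical user override for experimentation only; the real fix
>>>     # is carried in the reviews listed above
>>>     cat > mariadb-ipv6.yaml <<'EOF'
>>>     conf:
>>>       database:
>>>         config_override: |
>>>           [mysqld]
>>>           bind_address=::
>>>     EOF
>>>     system helm-override-update stx-openstack mariadb openstack --values mariadb-ipv6.yaml
>>>     system application-apply stx-openstack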
>>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. >>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... 
[OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... 
[OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! 
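A side note on the mariadb override described earlier in this thread: the same [mysqld] setting can usually be carried as a runtime helm user override on the mariadb chart instead of a static manifest edit. The following is only a rough sketch, assuming the stx-openstack application name, the mariadb chart in the openstack namespace, and the helm-override-update argument order of recent releases (check "system help helm-override-update" on the target load):

    # Write the override values; the literal block keeps "[mysqld]" and
    # "bind_address=::" as plain lines of the generated ini file.
    cat > mariadb-overrides.yaml <<'EOF'
    conf:
      database:
        config_override: |
          [mysqld]
          bind_address=::
    EOF
    # Attach the values to the mariadb chart, then re-apply the application
    # so openstack-helm regenerates 20-override.cnf from the new values.
    system helm-override-update stx-openstack mariadb openstack --values mariadb-overrides.yaml
    system application-apply stx-openstack

Whether this path avoids the stray "|-" line seen in 20-override.cnf depends on how the chart renders config_override, so treat it as something to try rather than a confirmed fix.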
>>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>> s >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at 
lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chris.friesen at windriver.com Tue Jun 16 17:09:01 2020 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 16 Jun 2020 11:09:01 -0600 Subject: [Starlingx-discuss] nominating Jerry Sun for core on oidc-auth-armada-app Message-ID: I'd like to propose adding Jerry Sun as a core for the oidc-auth-armada-app StarlingX git repo. Jerry has been working in this area since January, and has added stx-oidc-auth-helm and dex-helm as new packages and has reviewed code submitted by others.  He would make a useful addition to the core team. Chris From Robert.Church at windriver.com Tue Jun 16 17:11:15 2020 From: Robert.Church at windriver.com (Church, Robert) Date: Tue, 16 Jun 2020 17:11:15 +0000 Subject: [Starlingx-discuss] nominating Jerry Sun for core on oidc-auth-armada-app In-Reply-To: References: Message-ID: <17E7B9AF-6804-4844-95AD-595D4B9A4D32@windriver.com> +1 Bob On 6/16/20, 12:09 PM, "Chris Friesen" wrote: I'd like to propose adding Jerry Sun as a core for the oidc-auth-armada-app StarlingX git repo. Jerry has been working in this area since January, and has added stx-oidc-auth-helm and dex-helm as new packages and has reviewed code submitted by others. He would make a useful addition to the core team. Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Greg.Waines at windriver.com Tue Jun 16 17:36:24 2020 From: Greg.Waines at windriver.com (Waines, Greg) Date: Tue, 16 Jun 2020 17:36:24 +0000 Subject: [Starlingx-discuss] nominating Jerry Sun for core on oidc-auth-armada-app In-Reply-To: References: Message-ID: +1 From: Chris Friesen Date: Tuesday, June 16, 2020 at 1:10 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] nominating Jerry Sun for core on oidc-auth-armada-app I'd like to propose adding Jerry Sun as a core for the oidc-auth-armada-app StarlingX git repo. Jerry has been working in this area since January, and has added stx-oidc-auth-helm and dex-helm as new packages and has reviewed code submitted by others. He would make a useful addition to the core team. Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexandru.dimofte at intel.com Tue Jun 16 17:58:52 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 16 Jun 2020 17:58:52 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200616T000950Z Message-ID: Sanity Test from 2020-June-16 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200616T000950Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200616T000950Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D64420.EE3EE5B0] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20507 bytes Desc: image003.png URL: From Sabeel.Ansari at windriver.com Tue Jun 16 18:36:05 2020 From: Sabeel.Ansari at windriver.com (Ansari, Sabeel) Date: Tue, 16 Jun 2020 18:36:05 +0000 Subject: [Starlingx-discuss] nominating Jerry Sun for core on oidc-auth-armada-app In-Reply-To: References: Message-ID: +1 -----Original Message----- From: Chris Friesen Sent: Tuesday, June 16, 2020 1:09 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] nominating Jerry Sun for core on oidc-auth-armada-app I'd like to propose adding Jerry Sun as a core for the oidc-auth-armada-app StarlingX git repo. Jerry has been working in this area since January, and has added stx-oidc-auth-helm and dex-helm as new packages and has reviewed code submitted by others.  He would make a useful addition to the core team. 
Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chris.friesen at windriver.com Tue Jun 16 19:06:05 2020 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 16 Jun 2020 13:06:05 -0600 Subject: [Starlingx-discuss] nominating Jerry Sun for core on oidc-auth-armada-app -- nomination carried In-Reply-To: References: Message-ID: <47c2d158-4a12-b55a-577e-3eac44e3d268@windriver.com> We have agreement from the current cores...welcome to Jerry as a new core! Chris On 6/16/2020 11:09 AM, Chris Friesen wrote: > I'd like to propose adding Jerry Sun as a core for the > oidc-auth-armada-app StarlingX git repo. > > Jerry has been working in this area since January, and has added > stx-oidc-auth-helm and dex-helm as new packages and has reviewed code > submitted by others.  He would make a useful addition to the core team. > > Chris > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Tue Jun 16 19:39:14 2020 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 16 Jun 2020 12:39:14 -0700 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> Message-ID: Can we get some action on this nomination please. Thanks Sau! On 6/9/20 8:32 PM, Hu, Yong wrote: > Hi cores, > I would like to nominate these 2 guys as core reviewers in following project: > starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net > starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net > > Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. > > Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. > So, please let us know your feedback. Thanks! > > Regards, > Yong > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Frank.Miller at windriver.com Tue Jun 16 20:22:27 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 16 Jun 2020 20:22:27 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> Message-ID: I think it makes sense to add 1 prime to existing repos. As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. 
Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks!   Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Jun 16 22:49:35 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 16 Jun 2020 18:49:35 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 229 - Failure! Message-ID: <836206159.1711.1592347776223.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 229 Status: Failure Timestamp: 20200616T155749Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200616T130458Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs MASTER_BUILD_NUMBER: 9 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200616T130458Z DOCKER_BUILD_ID: jenkins-ussuri-20200616T130458Z-builder TIMESTAMP: 20200616T130458Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/outputs From build.starlingx at gmail.com Tue Jun 16 22:49:37 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 16 Jun 2020 18:49:37 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! 
In-Reply-To: <193648625.1702.1592309838134.JavaMail.javamailuser@localhost> References: <193648625.1702.1592309838134.JavaMail.javamailuser@localhost> Message-ID: <39783893.1714.1592347778601.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 9 Status: Still Failing Timestamp: 20200616T130458Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From zhipengs.liu at intel.com Wed Jun 17 00:07:50 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 17 Jun 2020 00:07:50 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! In-Reply-To: <39783893.1714.1592347778601.JavaMail.javamailuser@localhost> References: <193648625.1702.1592309838134.JavaMail.javamailuser@localhost> <39783893.1714.1592347778601.JavaMail.javamailuser@localhost> Message-ID: Hi scott, I have checked logs, base image, wheel build pass. For 13 openstack images, 5/13 failed, we will check further in our local setup today( We didn't see below errors during our local build) 1) nova, glance, heat ERROR: Could not find a version that satisfies the requirement networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, 2.4rc2, 2.4) ERROR: No matching distribution found for networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) 2) gnocchi ERROR: Could not find a version that satisfies the requirement uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) (from versions: none) ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 3) cinder ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) ERROR: No matching distribution found for google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 Thanks! Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月17日 6:50 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! Project: STX_build_master_ussuri Build #: 9 Status: Still Failing Timestamp: 20200616T130458Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From zhipengs.liu at intel.com Wed Jun 17 02:13:25 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 17 Jun 2020 02:13:25 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! 
In-Reply-To: References: <193648625.1702.1592309838134.JavaMail.javamailuser@localhost> <39783893.1714.1592347778601.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, Root cause found. It is strange, need you to further check your build script. For your base image build, I can see that [1] should be merged from below log. ------------------------------------------------ Step 5/5 : RUN set -ex ; sed -i '/\[main\]/ atimeout=120' /etc/yum.conf ; mv /stx.repo /etc/yum.repos.d/ ; yum upgrade --disablerepo=* ${REPO_OPTS} -y ; yum install --disablerepo=* ${REPO_OPTS} -y qemu-img openssh-clients python3 python3-pip python3-wheel rh-python36-mod_wsgi ; rm -rf /var/log/* /tmp/* /var/tmp/* ------------------------------------------------ But from openstack service image build log It seems not merged patche [1] So, it still uses train upper-constraints to build openstack image, see one log [2] https://raw.githubusercontent.com/openstack/requirements/stable/train/upper-constraints.txt [1] https://review.opendev.org/#/c/712880/ [2]http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs/docker-images/docker-stx-cinder-centos-stable.log -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 8:08 To: 'build.starlingx at gmail.com' ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! Hi scott, I have checked logs, base image, wheel build pass. For 13 openstack images, 5/13 failed, we will check further in our local setup today( We didn't see below errors during our local build) 1) nova, glance, heat ERROR: Could not find a version that satisfies the requirement networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, 2.4rc2, 2.4) ERROR: No matching distribution found for networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) 2) gnocchi ERROR: Could not find a version that satisfies the requirement uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) (from versions: none) ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 3) cinder ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) ERROR: No matching distribution found for google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 Thanks! Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月17日 6:50 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! 
Project: STX_build_master_ussuri Build #: 9 Status: Still Failing Timestamp: 20200616T130458Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From yong.hu at intel.com Wed Jun 17 02:23:58 2020 From: yong.hu at intel.com (Hu, Yong) Date: Wed, 17 Jun 2020 02:23:58 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> Message-ID: <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> Thank Frank for this +1. We need another +1 from other cores. Regards, Yong On 2020/6/17, 4:22 AM, "Miller, Frank" wrote: I think it makes sense to add 1 prime to existing repos. As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks! Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Wed Jun 17 05:14:34 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 17 Jun 2020 05:14:34 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! In-Reply-To: References: <193648625.1702.1592309838134.JavaMail.javamailuser@localhost> <39783893.1714.1592347778601.JavaMail.javamailuser@localhost> Message-ID: Sorry for typo, below "merge" should be "cherry-pick" -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 10:13 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! Hi Scott, Root cause found. It is strange, need you to further check your build script. For your base image build, I can see that [1] should be cherry-picked from below log. 
------------------------------------------------ Step 5/5 : RUN set -ex ; sed -i '/\[main\]/ atimeout=120' /etc/yum.conf ; mv /stx.repo /etc/yum.repos.d/ ; yum upgrade --disablerepo=* ${REPO_OPTS} -y ; yum install --disablerepo=* ${REPO_OPTS} -y qemu-img openssh-clients python3 python3-pip python3-wheel rh-python36-mod_wsgi ; rm -rf /var/log/* /tmp/* /var/tmp/* ------------------------------------------------ But from openstack service image build log It seems not cherry-pick patch [1] So, it still uses train upper-constraints to build openstack image, see one log [2] https://raw.githubusercontent.com/openstack/requirements/stable/train/upper-constraints.txt [1] https://review.opendev.org/#/c/712880/ [2]http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs/docker-images/docker-stx-cinder-centos-stable.log Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 8:08 To: 'build.starlingx at gmail.com' ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! Hi scott, I have checked logs, base image, wheel build pass. For 13 openstack images, 5/13 failed, we will check further in our local setup today( We didn't see below errors during our local build) 1) nova, glance, heat ERROR: Could not find a version that satisfies the requirement networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, 2.4rc2, 2.4) ERROR: No matching distribution found for networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) 2) gnocchi ERROR: Could not find a version that satisfies the requirement uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) (from versions: none) ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 3) cinder ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) ERROR: No matching distribution found for google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 Thanks! Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月17日 6:50 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! 
Project: STX_build_master_ussuri Build #: 9 Status: Still Failing Timestamp: 20200616T130458Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Wed Jun 17 12:22:12 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 17 Jun 2020 12:22:12 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 17, 2020) Message-ID: Hi all, reminder of the TSC/Community call coming up later today. Please feel free to add items to the agenda [0] for the Community call beforehand. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200617T1400 From austin.sun at intel.com Wed Jun 17 13:28:10 2020 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 17 Jun 2020 13:28:10 +0000 Subject: [Starlingx-discuss] MoM: Weekly StarlingX non-OpenStack distro meeting, 6/17/2020 Message-ID: Hi All: Thanks join the call. MoM for 6/17 meeting: - Auto version integ/kernel repo: https://storyboard.openstack.org/#!/story/2007750 https://review.opendev.org/#/q/topic:pkg-versioning+(status:open+OR+status:merged) https://review.opendev.org/#/c/733459/ is ready for review. - ceph containerization: martin has uploaded new patch to support B&R(no-openstack, only platform) , has tested Simplex and Duplex B&R. multi is under tested. the plan will be create feature branch rook after 4.0 release branch. - centos8 and python3 logmgmt --get merged. the first package running python3 on master centos8: --- "no python2 needed in spec files" --- There are 15 packages no el8 version, need built from source or find alternative packages. --- no available modular metadata for modular package. --- still debugging test enable modular data on repodata , but not fix the issue. need further investigation - bugs: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other https://bugs.launchpad.net/starlingx/+bug/1882172 , it's random issue. based on "20200614T080013Z" , can't reproduce in 2*100000 times. Thanks. BR Austin Sun. From yang.liu at windriver.com Tue Jun 16 15:34:36 2020 From: yang.liu at windriver.com (Liu, Yang (YOW)) Date: Tue, 16 Jun 2020 15:34:36 +0000 Subject: [Starlingx-discuss] [ Test ] meeting notes - 06/16/2020 Message-ID: Agenda for 6/16/2020 Attendees: Yang, Ruediger, GeorgeP, Mihail, Alex, Oliver · Sanity Status: * Build from 0615 is green. Today's load is also looking good so far. * There was a discussion in vPTG to add force reboot controller test into sanity - not added yet. Test case is ready, will integrate after fix is merged. 
* GeorgeP observed that cert-manager apply-failed - check /var/log/armada/ and report LP · stx4.0 testing: * Feature testing in progress: § https://docs.google.com/spreadsheets/d/1C9n4aRQT7xMyTDCT5sfuZGNI9ermAX5BYRypzcCpQ6U/edit#gid=0 § Centos8: · 1 test case should obsolete - George will take action · George will open LP on one last test case § TSN · Setup completed · Testing in progress and unblocked § Upgrade Containerized OpenStack to Ussuri (and OpenStack helm rebase) · Planned sanity is completed. IPv6 is not covered. · Suggest to cover all openstack components as a minimum - e.g., nova, neutron, cinder, telemetry, glance, heat, etc · Merging is in progress § Kata Containers · Kata container test completed · Nicolae to check with designer on this issue: "Check PID namespaces" - Will confirm with Nic in next meeting * Feature test completed or dropped from stx4.0: § Upversion Openstack services used by flock components on host: · Test completed - feature spreadsheet updated. § Windows Active directory completed · Testing is completed with small add-on to support multiple-dex § Red fish virual media support - testing completed § Kubernetes Upgrade Support · Completed - feature spreadsheet updated. § B&R with etcd database · Feature testing completed · Weekly based regresssion is done - simplex system is passing § Ceph - Rook - taken out from stx 4.0 - test activities paused * Regression testing: § Regression started - Both teams are making progress. · OpenStack regression - waiting for Ussuri to repeat. · Will continue with Regression with latest green load § George: cannot launch vm in virtual env with hugepage - need to check system host-memory-list , and ensure 2M or 1G pages are allocated to application cores. § Stability improved · George will check force reboot test on latest load, and regression can move to new load if stability improved § Telemetry test cases now passing on train · Open topic * Test automation § Need automated installation script · Robot framwork installation scripts are used in daily sanity with basic setup and provisioning - needs libvert and qemu installed o https://opendev.org/starlingx/test/src/branch/master/automated-robot-suite o Nic: Look into adding to Docs/Wiki after stx4.0. § Nic's team will publish regression test cases on github - not a priority right now · https://github.com/starlingx-staging/robot-tests § After stx4.0, put more effort on test automation · Nic: move some robot regression to pytest · Yang: automate new feature test cases * Test in the open § Yang: coming up with VM requirement to send to opendev · Will discuss with networking expert for interface requirement -------------- next part -------------- An HTML attachment was scrubbed... URL: From Qi.Chen at windriver.com Wed Jun 17 08:41:04 2020 From: Qi.Chen at windriver.com (ChenQi) Date: Wed, 17 Jun 2020 16:41:04 +0800 Subject: [Starlingx-discuss] Quick Start Failure -- AIO Simplex Message-ID: <94cec056-3b5c-9cde-7802-724aa1e7bd80@windriver.com> Hi All, I followed https://docs.starlingx.io/deploy_install_guides/r3_release/virtual/aio_simplex_install_kubernetes.html as a quick start, and got the following error. 
*Console output:* Installing libndp (1119/1146) Installing iperf3 (1120/1146) Installing docker-forward-journald (1121/1146) Installing namespace-utils (1122/1146) Installing libestr (1123/1146) Installing setup-config (1124/1146) Installing collector (1125/1146) Installing mlx4-config (1126/1146) Installing helm (1127/1146) ================================================================================ ================================================================================ Error The following error occurred while installing. This is a fatal error and installation will be aborted. process '['/usr/libexec/anaconda/anaconda-yum', '--config', '/tmp/anaconda- yum.conf', '--tsfile', '/mnt/sysimage/anaconda-yum.yumtx', '--rpmlog', '/tmp /rpm-script.log', '--installroot', '/mnt/sysimage', '--release', '7', '--arch', 'x86_64', '--macro', '__dbi_htconfig', 'hash nofsync %{__dbi_other} %{__dbi_perms}', '--macro', '__file_context_path', '/etc/selinux/targeted/contexts/files/file_contexts']' exited with status 1 Press enter to exit. *journalctl --no-pager output* Jun 17 08:18:23 localhost packaging[3121]: Installing nfv-vim (1098/1146) Jun 17 08:18:27 localhost packaging[3121]: File "/usr/libexec/anaconda/anaconda-yum", line 336, in inst_open_file Jun 17 08:18:27 localhost packaging[3121]: os.unlink(txmbr.po.localPkg()) Jun 17 08:18:27 localhost packaging[3121]: OSError: [Errno 30] Read-only file system: '/run/install/repo/Packages/helm-2.13.1-0.tis.2.x86_64.rpm' Jun 17 08:18:27 localhost packaging[3121]: FATAL ERROR: python callback > failed, aborting! Jun 17 08:18:27 localhost packaging[3121]: Error running anaconda-yum: process '['/usr/libexec/anaconda/anaconda-yum', '--config', '/tmp/anaconda-yum.conf', '--tsfile', '/mnt/sysimage/anaconda-yum.yumtx', '--rpmlog', '/tmp/rpm-script.log', '--installroot', '/mnt/sysimage', '--release', '7', '--arch', 'x86_64', '--macro', '__dbi_htconfig', 'hash nofsync %{__dbi_other} %{__dbi_perms}', '--macro', '__file_context_path', '/etc/selinux/targeted/contexts/files/file_contexts']' exited with status 1 *Check /run/install/repo* [anaconda root at localhost ~]# mount | grep repo /dev/sr0 on /run/install/repo type iso9660 (ro,relatime) Does someone have any idea why it's failing? Best Regards, Chen Qi -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jun 17 14:54:54 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 17 Jun 2020 14:54:54 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 17, 2020) In-Reply-To: References: Message-ID: >From today... 
* Standing Topics * Sanity * last 3 sanities were green * Gerrit Reviews in Need of Attention * keep reviewing the Ussuri changes https://review.opendev.org/#/q/topic:for_ussuri+(status:open) * Topics for this Week * Discourse: * http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/008846.html * general agreement that we need to have as few as possible comms tools * Saul mentioned this tool for IRC: webchat.freenode.org/#starlingx * Ussuri Build: Scott will work with Saul/Zhipeng on git to try to unblock the build * ARs from Previous Meetings * nothing new this week, Build items will only be mentioned here as they are closed * Open Requests for Help * didn't review this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, June 17, 2020 8:22 AM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (June 17, 2020) Hi all, reminder of the TSC/Community call coming up later today. Please feel free to add items to the agenda [0] for the Community call beforehand. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200617T1400 From build.starlingx at gmail.com Wed Jun 17 15:27:33 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Jun 2020 11:27:33 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 11 - Failure! Message-ID: <151222341.1721.1592407654585.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 11 Status: Failure Timestamp: 20200617T150550Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200617T150550Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From sgw at linux.intel.com Wed Jun 17 15:57:12 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 17 Jun 2020 08:57:12 -0700 Subject: [Starlingx-discuss] Multi-OS Sub-project meeting Message-ID: <559336af-9267-fa16-19ff-263d26834130@linux.intel.com> Folks, We will meeting Thursday morning at 7:30 (PT), 13:30 (UTC) after the build team meeting. Agenda: - Status update from team working on OpenEmbedded Layer. Sau! From zhipengs.liu at intel.com Wed Jun 17 16:39:55 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 17 Jun 2020 16:39:55 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! In-Reply-To: References: <193648625.1702.1592309838134.JavaMail.javamailuser@localhost> <39783893.1714.1592347778601.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, I found the issue here. From log http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs/jenkins-STX_build_wheels-230.log.html The script put a wrong upper-constraint file to below tarball. 
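For anyone triaging this kind of failure, a quick way to confirm which release a fetched upper-constraints.txt really corresponds to is to grep the exact pins the failed image builds complained about and compare them with the stable/ussuri file. A small sketch, assuming upper-constraints.txt is the copy the wheel build downloaded (the same file that later shows up as /tmp/wheels/upper-constraints.txt during the loci install step):

    # Pins in the file the build actually used.
    grep -E '^(networkx|uWSGI|google-api-python-client)===' upper-constraints.txt
    # Pins on the requirements stable/ussuri branch, for comparison.
    curl -s https://raw.githubusercontent.com/openstack/requirements/stable/ussuri/upper-constraints.txt \
      | grep -E '^(networkx|uWSGI|google-api-python-client)==='

If the two do not line up, the wrong branch or commit was fetched. The pinned-commit fetch captured in the wheel build log follows.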
Running: wget https://opendev.org/openstack/requirements/raw/commit/2da5c5045118b0e36fb14427872e4b9b37335071/upper-constraints.txt This was introduced by patch https://review.opendev.org/#/c/708766/ # Locking down constraints as a temporary fix for build issues, until images move to python3 - LP: 1863957 #with_retries ${MAX_ATTEMPTS} wget https://raw.githubusercontent.com/openstack/requirements/${OPENSTACK_BRANCH}/upper-constraints.txt with_retries ${MAX_ATTEMPTS} wget https://opendev.org/openstack/requirements/raw/commit/2da5c5045118b0e36fb14427872e4b9b37335071/upper-constraints.txt Now we need change it back, since we move openstack to ussuri which support python3 only now. I also got double confirm from Chant that he changed here manually in his build environment, but not submitted it to below patch. I have updated patch https://review.opendev.org/#/c/712880/ @Scott Little Please help cherry pick the latest update and retrigger the build again. Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 13:15 To: Liu, ZhipengS ; Scott Little ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! Sorry for typo, below "merge" should be "cherry-pick" -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 10:13 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! Hi Scott, Root cause found. It is strange, need you to further check your build script. For your base image build, I can see that [1] should be cherry-picked from below log. ------------------------------------------------ Step 5/5 : RUN set -ex ; sed -i '/\[main\]/ atimeout=120' /etc/yum.conf ; mv /stx.repo /etc/yum.repos.d/ ; yum upgrade --disablerepo=* ${REPO_OPTS} -y ; yum install --disablerepo=* ${REPO_OPTS} -y qemu-img openssh-clients python3 python3-pip python3-wheel rh-python36-mod_wsgi ; rm -rf /var/log/* /tmp/* /var/tmp/* ------------------------------------------------ But from openstack service image build log It seems not cherry-pick patch [1] So, it still uses train upper-constraints to build openstack image, see one log [2] https://raw.githubusercontent.com/openstack/requirements/stable/train/upper-constraints.txt [1] https://review.opendev.org/#/c/712880/ [2]http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs/docker-images/docker-stx-cinder-centos-stable.log Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 8:08 To: 'build.starlingx at gmail.com' ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! Hi scott, I have checked logs, base image, wheel build pass. 
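To make the intended revert concrete, the branch-based form of that fetch would look roughly like the commented-out line quoted above, with the branch now pointing at Ussuri. A sketch only; OPENSTACK_BRANCH=stable/ussuri is an assumption about how the build is configured, with_retries and MAX_ATTEMPTS are the helpers already used by the script, and the tarball name and member path in the spot check are assumptions as well:

    OPENSTACK_BRANCH="stable/ussuri"
    # Fetch the constraints that match the Ussuri wheels instead of the
    # pinned Train-era commit.
    with_retries ${MAX_ATTEMPTS} wget https://raw.githubusercontent.com/openstack/requirements/${OPENSTACK_BRANCH}/upper-constraints.txt
    # Spot check after rebuilding the wheels tarball: the constraints file
    # packed into the tarball should now carry the Ussuri pins.
    tar -xOf stx-centos-stable-wheels.tar upper-constraints.txt | grep -E '^networkx==='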
For 13 openstack images, 5/13 failed, we will check further in our local setup today( We didn't see below errors during our local build) 1) nova, glance, heat ERROR: Could not find a version that satisfies the requirement networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, 2.4rc2, 2.4) ERROR: No matching distribution found for networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) 2) gnocchi ERROR: Could not find a version that satisfies the requirement uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) (from versions: none) ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 3) cinder ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) ERROR: No matching distribution found for google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 Thanks! Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月17日 6:50 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! Project: STX_build_master_ussuri Build #: 9 Status: Still Failing Timestamp: 20200616T130458Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bnovickovs at gmail.com Wed Jun 17 13:58:11 2020 From: bnovickovs at gmail.com (Boriss N) Date: Wed, 17 Jun 2020 14:58:11 +0100 Subject: [Starlingx-discuss] Questions related to Startlingx & Openstack Message-ID: Hello, First of all, I would like to say thank you very much for your effort in this amazing project, and congratulations for being a top-level project at Openstack Foundation. I have been testing StarlingX for 5 days now and wanted to ask / specify some questions of mine related to Starlingx in general and Openstack deployment on it in particular. StarlingX: 1. I know that the 4th release is upcoming in July. Also, from what I've heard in StarlingX irc channel, you can't upgrade from version 3 to version 4. Thus, I am wondering if I deploy the 4th version, would I be able to upgrade it in the future with up-to-date releases? 2. I really enjoy StarlingX management UI. However, I can see that it's not fully-functional yet. I am wondering, if Host Inventory will be upgraded so the host can be added via UI? Is Software Management can be used for patching anything within infrastructure and StarlingX related? 3. 
Since Kubernetes is using docker, and docker is highly dependent on Linux kernel; and as far as I know, it is the best practice to use newer versions of kernel with docker, do you have any plans of upgrading kernel version til 5.x version? Openstack: 1. I've been working a lot with Mirantis MOSK (Openstack on Kubernetes) https://docs.mirantis.com/mosk/beta/index.html & https://github.com/Mirantis/release-openstack-k8s ; which uses operator's mechanism for both openstack and ceph. Their package is very nice, still in beta though, but it is consuming a lot more resources compared to Starlingx package. However, mosk provides designate/octavia/cells2 services by default. Thus, I am wondering if the StarlingX Openstack package can offer additional services deployment if required, like octavia or magnum? 2. In documentation it is specified that having an additional disk for Openstack nova-local ephemeral images is a good practice. However, I could not figure out how to use a separate disk for that since it's not specified anywhere in the docs. Default procedure is to use a root disk (screenshot: https://prnt.sc/t1fqwr). Therefore, how can I use a separate disk for nova-local images and avoid having those errors in openstack when I am creating an instance without volume (screenshot: https://prnt.sc/t1fs41). 3. I had a similar issue at the start https://bugs.launchpad.net/starlingx/+bug/1867731 ; However, I am a bit confused why you would use services externally, using external FQDN, while you use horizon. When setup external dns for openstack services and open horizon UI, it works very slowly. If you create volume in horizon and create volume using internal CLI, its totally different speeds but it should not be like that. I am wondering, if Horizon can be used while all the services will be connected and used internally? That is what I can see using MOSK, and Horizon UI of theirs is working very fast. 4. I think you are aware of it, but you can't upload images (glance sends python errors) using horizon ui therefore, only internal CLI can be used for that. 5. I would really appreciate if you can help me with troubleshooting this: https://bugs.launchpad.net/starlingx/+bug/1883911 Thank you very much and regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From bnovickovs at weecodelab.com Wed Jun 17 17:14:12 2020 From: bnovickovs at weecodelab.com (bnovickovs at weecodelab.com) Date: Wed, 17 Jun 2020 18:14:12 +0100 Subject: [Starlingx-discuss] Questions related to Startlingx & Openstack Message-ID: <75cfc4f7649862091629d1ce7925eb18@weecodelab.com> Hello, First of all, I would like to say thank you very much for your effort in this amazing project, and congratulations for being a top-level project at Openstack Foundation. I have been testing StarlingX for 5 days now and wanted to ask / specify some questions of mine related to Starlingx in general and Openstack deployment on it in particular. StarlingX: 1. I know that the 4th release is upcoming in July. Also, from what I've heard in StarlingX irc channel, you can't upgrade from version 3 to version 4. Thus, I am wondering if I deploy the 4th version, would I be able to upgrade it in the future with up-to-date releases? 2. I really enjoy StarlingX management UI. However, I can see that it's not fully-functional yet. I am wondering, if Host Inventory will be upgraded so the host can be added via UI? Is Software Management can be used for patching anything within infrastructure and StarlingX related? 3. 
Since Kubernetes is using docker, and docker is highly dependent on Linux kernel; and as far as I know, it is the best practice to use newer versions of kernel with docker, do you have any plans of upgrading kernel version til 5.x version? Openstack: 1. I've been working a lot with Mirantis MOSK (Openstack on Kubernetes) https://docs.mirantis.com/mosk/beta/index.html & https://github.com/Mirantis/release-openstack-k8s ; which uses operator's mechanism for both openstack and ceph. Their package is very nice, still in beta though, but it is consuming a lot more resources compared to Starlingx package. However, mosk provides designate/octavia/cells2 services by default. Thus, I am wondering if the StarlingX Openstack package can offer additional services deployment if required, like octavia or magnum? 2. In documentation it is specified that having an additional disk for Openstack nova-local ephemeral images is a good practice. However, I could not figure out how to use a separate disk for that since it's not specified anywhere in the docs. Default procedure is to use a root disk (screenshot: https://prnt.sc/t1fqwr). Therefore, how can I use a separate disk for nova-local images and avoid having those errors in openstack when I am creating an instance without volume (screenshot: https://prnt.sc/t1fs41). 3. I had a similar issue at the start https://bugs.launchpad.net/starlingx/+bug/1867731 ; However, I am a bit confused why you would use services externally, using external FQDN, while you use horizon. When setup external dns for openstack services and open horizon UI, it works very slowly. If you create volume in horizon and create volume using internal CLI, its totally different speeds but it should not be like that. I am wondering, if Horizon can be used while all the services will be connected and used internally? That is what I can see using MOSK, and Horizon UI of theirs is working very fast. 4. I think you are aware of it, but you can't upload images (glance sends python errors) using horizon ui therefore, only internal CLI can be used for that. 5. 
I would really appreciate if you can help me with troubleshooting this: https://bugs.launchpad.net/starlingx/+bug/1883911 Thank you very much and regards From alexandru.dimofte at intel.com Wed Jun 17 17:32:53 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 17 Jun 2020 17:32:53 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200617T021347Z Message-ID: Sanity Test from 2020-June-17 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200617T021347Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200617T021347Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D644E6.76E64E60] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20507 bytes Desc: image003.png URL: From scott.little at windriver.com Wed Jun 17 18:30:18 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 17 Jun 2020 14:30:18 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! In-Reply-To: References: <193648625.1702.1592309838134.JavaMail.javamailuser@localhost> <39783893.1714.1592347778601.JavaMail.javamailuser@localhost> Message-ID: <9f37ac09-7e18-c1be-0b19-3c14468029af@windriver.com> will do On 2020-06-17 12:39 p.m., Liu, ZhipengS wrote: > Hi Scott, > > I found the issue here. > > From log http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs/jenkins-STX_build_wheels-230.log.html > The script put a wrong upper-constraint file to below tarball. 
> Running: wget https://opendev.org/openstack/requirements/raw/commit/2da5c5045118b0e36fb14427872e4b9b37335071/upper-constraints.txt > > This was introduced by patch https://review.opendev.org/#/c/708766/ > # Locking down constraints as a temporary fix for build issues, until images move to python3 - LP: 1863957 > #with_retries ${MAX_ATTEMPTS} wget https://raw.githubusercontent.com/openstack/requirements/${OPENSTACK_BRANCH}/upper-constraints.txt > with_retries ${MAX_ATTEMPTS} wget https://opendev.org/openstack/requirements/raw/commit/2da5c5045118b0e36fb14427872e4b9b37335071/upper-constraints.txt > > Now we need change it back, since we move openstack to ussuri which support python3 only now. > I also got double confirm from Chant that he changed here manually in his build environment, but not submitted it to below patch. > > I have updated patch > https://review.opendev.org/#/c/712880/ > @Scott Little Please help cherry pick the latest update and retrigger the build again. > > Thanks! > Zhipeng > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月17日 13:15 > To: Liu, ZhipengS ; Scott Little ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! > > Sorry for typo, below "merge" should be "cherry-pick" > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月17日 10:13 > To: Scott Little ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! > > Hi Scott, > > Root cause found. > It is strange, need you to further check your build script. > > For your base image build, I can see that [1] should be cherry-picked from below log. > ------------------------------------------------ > Step 5/5 : RUN set -ex ; sed -i '/\[main\]/ atimeout=120' /etc/yum.conf ; mv /stx.repo /etc/yum.repos.d/ ; yum upgrade --disablerepo=* ${REPO_OPTS} -y ; yum install --disablerepo=* ${REPO_OPTS} -y qemu-img openssh-clients python3 python3-pip python3-wheel rh-python36-mod_wsgi ; rm -rf /var/log/* /tmp/* /var/tmp/* > ------------------------------------------------ > > But from openstack service image build log It seems not cherry-pick patch [1] So, it still uses train upper-constraints to build openstack image, see one log [2] https://raw.githubusercontent.com/openstack/requirements/stable/train/upper-constraints.txt > > > [1] https://review.opendev.org/#/c/712880/ > [2]http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs/docker-images/docker-stx-cinder-centos-stable.log > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月17日 8:08 > To: 'build.starlingx at gmail.com' ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! > > Hi scott, > > I have checked logs, base image, wheel build pass. 
> For 13 openstack images, 5/13 failed, we will check further in our local setup today( We didn't see below errors during our local build) > > 1) nova, glance, heat > ERROR: Could not find a version that satisfies the requirement networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) (from versions: 1.9rc1, 1.9, 1.9.1, 1.10rc2, 1.11rc2, 1.11, 2.2, 2.4rc2, 2.4) > ERROR: No matching distribution found for networkx===2.3 (from -c /tmp/wheels/upper-constraints.txt (line 101)) > 2) gnocchi > ERROR: Could not find a version that satisfies the requirement uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) (from versions: none) > ERROR: No matching distribution found for uWSGI===2.0.17.1 (from -c /tmp/wheels/upper-constraints.txt (line 705)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 > 3) cinder > ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3) > ERROR: No matching distribution found for google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1 > > Thanks! > Zhipeng > > -----Original Message----- > From: build.starlingx at gmail.com > Sent: 2020年6月17日 6:50 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 9 - Still Failing! > > Project: STX_build_master_ussuri > Build #: 9 > Status: Still Failing > Timestamp: 20200616T130458Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200616T130458Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Wed Jun 17 18:55:25 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Jun 2020 14:55:25 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 12 - Still Failing! In-Reply-To: <769414093.1719.1592407652504.JavaMail.javamailuser@localhost> References: <769414093.1719.1592407652504.JavaMail.javamailuser@localhost> Message-ID: <1013296182.1725.1592420125902.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 12 Status: Still Failing Timestamp: 20200617T185306Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200617T185306Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From build.starlingx at gmail.com Wed Jun 17 20:27:17 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Jun 2020 16:27:17 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 13 - Still Failing! 
In-Reply-To: <1027399680.1723.1592420124300.JavaMail.javamailuser@localhost> References: <1027399680.1723.1592420124300.JavaMail.javamailuser@localhost> Message-ID: <756114219.1729.1592425638041.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 13 Status: Still Failing Timestamp: 20200617T202536Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200617T202536Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From scott.little at windriver.com Wed Jun 17 20:27:59 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 17 Jun 2020 16:27:59 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 12 - Still Failing! In-Reply-To: <1013296182.1725.1592420125902.JavaMail.javamailuser@localhost> References: <769414093.1719.1592407652504.JavaMail.javamailuser@localhost> <1013296182.1725.1592420125902.JavaMail.javamailuser@localhost> Message-ID: <33e55714-dd9f-1625-0d24-bb08593aa420@windriver.com> Odd failure processing the Dockerfile. Trying a fix.  Restarting ussuri build ... Scott On 2020-06-17 2:55 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_ussuri > Build #: 12 > Status: Still Failing > Timestamp: 20200617T185306Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200617T185306Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Jun 17 20:44:01 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Jun 2020 16:44:01 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 14 - Still Failing! In-Reply-To: <559536545.1727.1592425636172.JavaMail.javamailuser@localhost> References: <559536545.1727.1592425636172.JavaMail.javamailuser@localhost> Message-ID: <1491336408.1733.1592426642245.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 14 Status: Still Failing Timestamp: 20200617T204241Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200617T204241Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From build.starlingx at gmail.com Wed Jun 17 22:26:17 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Jun 2020 18:26:17 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 15 - Still Failing! 
In-Reply-To: <945146211.1731.1592426640488.JavaMail.javamailuser@localhost> References: <945146211.1731.1592426640488.JavaMail.javamailuser@localhost> Message-ID: <1622032793.1737.1592432780833.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 15 Status: Still Failing Timestamp: 20200617T220616Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200617T220616Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From poornima.y.n at intel.com Thu Jun 18 00:47:20 2020 From: poornima.y.n at intel.com (N, Poornima Y) Date: Thu, 18 Jun 2020 00:47:20 +0000 Subject: [Starlingx-discuss] Build error in stx 3.0 Message-ID: Hi all, I'm facing a build issue on StarlingX 3.0. Below are the steps. I have synced the code using the following repo command: repo init -u https://opendev.org/starlingx/manifest -m default.xml -b r/stx.3.0 I get missing targets for the source RPMs below when I run the command generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/: Missing targets: ntp-4.2.6p5-28.el7.centos.src.rpm systemd-219-62.el7_6.5.src.rpm When I look into the stx-tools/centos-mirror-tools/rpms_centos.lst file, the following source RPMs are listed: systemd-219-67.el7.src.rpm ntp-4.2.6p5-29.el7.centos.src.rpm Notice that there is a version mismatch! If I proceed with build-pkgs, the following is the error I get: 17:30:55 ============ Build failed ============= 17:30:55 b5: ERROR: build_dir (417): Invalid srpm path 'mirror:Source/systemd-219-67.el7.src.rpm', evaluated as '/localdisk/designer/pyn/stx/cgcs-root/cgcs-centos-repo/Source/systemd-219-67.el7.src.rpm', found in '/localdisk/designer/pyn/stx/cgcs-root/stx/integ/base/systemd/centos/srpm_path' 17:30:55 ERROR: reaper (1304): Failed to build src.rpm from source at 'b5' 17:30:55 Can anyone point me to how to resolve this error? Am I missing anything? Thanks and Regards, Poornima Y N -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Thu Jun 18 01:18:29 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Thu, 18 Jun 2020 01:18:29 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , Message-ID: Hi Dan, I checked restore with the latest code; restore fails with the following log. When I checked a code base from the June 5 master branch, there was no such issue. Do you know about this? TASK [bootstrap/bringup-essential-services : Create Armada node label] ********************************************************************************************************************** fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubectl", "label", "node", "controller-0", "armada=enabled"], "delta": "0:00:00.102152", "end": "2020-06-18 00:57:32.563552", "msg": "non-zero return code", "rc": 1, "start": "2020-06-18 00:57:32.461400", "stderr": "error: 'armada' already has a value (enabled), and --overwrite is false", "stderr_lines": ["error: 'armada' already has a value (enabled), and --overwrite is false"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************************************************************************************************************************** localhost : ok=354 changed=156 unreachable=0 failed=1 [sysadmin at controller-0 ~(keystone_admin)]$ BR!
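A minimal recovery sketch for the Armada label failure above, assuming the only problem is that the armada=enabled label was left behind by an earlier bootstrap or restore attempt (kubectl refuses to re-apply an existing label unless --overwrite is given). This is an illustrative workaround, not part of the documented restore procedure:

    # hypothetical manual recovery step: re-apply the label idempotently, then re-run the restore playbook
    kubectl label node controller-0 armada=enabled --overwrite

The equivalent fix inside the playbook itself would be to pass --overwrite in that task, or to treat the "already has a value" error as success.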
Martin, Chen IOTG, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Thursday, June 11, 2020 10:53 AM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: RE: issue for backup and restore Hi voiculeasa I confirm backup and restore works without ceph backend. This issue is caused with my improper provision step. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 9, 2020 5:54 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return 
context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jun 18 01:36:04 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Jun 2020 21:36:04 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 147 - Failure! 
Message-ID: <1546286768.1741.1592444165215.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 147 Status: Failure Timestamp: 20200618T013428Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200618T013428Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Thu Jun 18 03:04:14 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 17 Jun 2020 23:04:14 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 16 - Still Failing! In-Reply-To: <1165681829.1735.1592432775717.JavaMail.javamailuser@localhost> References: <1165681829.1735.1592432775717.JavaMail.javamailuser@localhost> Message-ID: <1481043145.1745.1592449455436.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 16 Status: Still Failing Timestamp: 20200618T024503Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T024503Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From scott.little at windriver.com Thu Jun 18 03:12:04 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 17 Jun 2020 23:12:04 -0400 Subject: [Starlingx-discuss] Build error in stx 3.0 In-Reply-To: References: Message-ID: <0dd8105b-db7d-6363-8ac4-17e4aced38fa@windriver.com> Is your MY_REPO_ROOT_DIR environment variable pointing to the right place ? generate-cgcs-centos-repo.sh should be creating a repo based on the content of $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools/rpms_centos.lst . Scott On 2020-06-17 8:47 p.m., N, Poornima Y wrote: > > Hi all, > > I’m facing build issue on StarlingX 3.0. Below are the steps > > I have sync the code using below repo command: > > *repo init -u https://opendev.org/starlingx/manifest -m default.xml -b > r/stx.3.0* > > ** > > I get Missing targets for below src, when give the command > *generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/*: > > *Missing targets:* > > *ntp-4.2.6p5-28.el7.centos.src.rpm* > > *systemd-219-62.el7_6.5.src.rpm* > > ** > > When I  look into stx-tools/centos-mirror-tools/rpms_centos.lst file, > the following src are mentioned: > > *systemd-219-67.el7.src.rpm* > > *ntp-4.2.6p5-29.el7.centos.src.rpm* > > ** > > *Notice that, there is a mismatch in the version!* > > ** > > If I proceed with build-pkgs following is the error I get*:* > > *17:30:55 ============ Build failed =============** > 17:30:55 b5: ERROR: build_dir (417): Invalid srpm path > 'mirror:Source/systemd-219-67.el7.src.rpm', evaluated as > '/localdisk/designer/pyn/stx/cgcs-root/cgcs-centos-repo/Source/systemd-219-67.el7.src.rpm', > found in > '/localdisk/designer/pyn/stx/cgcs-root/stx/integ/base/systemd/centos/srpm_path' > 17:30:55 ERROR: reaper (1304): Failed to build src.rpm from source at 'b5' > 17:30:55* > > ** > > Can anyone point me as to how to resolve this error?. Am I missing > anything? > > ** > > *Thanks and Regards,* > > *Poornima Y N* > > ** > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scott.little at windriver.com Thu Jun 18 03:14:34 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 17 Jun 2020 23:14:34 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 16 - Still Failing! In-Reply-To: <1481043145.1745.1592449455436.JavaMail.javamailuser@localhost> References: <1165681829.1735.1592432775717.JavaMail.javamailuser@localhost> <1481043145.1745.1592449455436.JavaMail.javamailuser@localhost> Message-ID: <9453e117-c4d5-b50a-1a32-98d572b02297@windriver.com> ok, figured it out. Our source for mock-1.4.19 was no longer valid. Scott On 2020-06-17 11:04 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_ussuri > Build #: 16 > Status: Still Failing > Timestamp: 20200618T024503Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T024503Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jun 18 04:23:36 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Jun 2020 00:23:36 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 445 - Failure! Message-ID: <1684229081.1750.1592454216817.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 445 Status: Failure Timestamp: 20200618T041534Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200618T040258Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200618T040258Z DOCKER_BUILD_ID: jenkins-master-flock-20200618T040258Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200618T040258Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200618T040258Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Thu Jun 18 04:23:38 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Jun 2020 00:23:38 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 148 - Still Failing! 
In-Reply-To: <1256053827.1739.1592444163500.JavaMail.javamailuser@localhost> References: <1256053827.1739.1592444163500.JavaMail.javamailuser@localhost> Message-ID: <1696506994.1753.1592454218842.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 148 Status: Still Failing Timestamp: 20200618T040258Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200618T040258Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Thu Jun 18 04:37:00 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Jun 2020 00:37:00 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 750 - Failure! Message-ID: <1232866835.1756.1592455021181.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 750 Status: Failure Timestamp: 20200618T034718Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T031843Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200618T031843Z DOCKER_BUILD_ID: jenkins-ussuri-20200618T031843Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T031843Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200618T031843Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri From build.starlingx at gmail.com Thu Jun 18 04:37:02 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Jun 2020 00:37:02 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 17 - Still Failing! In-Reply-To: <1235198539.1743.1592449453816.JavaMail.javamailuser@localhost> References: <1235198539.1743.1592449453816.JavaMail.javamailuser@localhost> Message-ID: <52195362.1759.1592455023240.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 17 Status: Still Failing Timestamp: 20200618T031843Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T031843Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From poornima.y.n at intel.com Thu Jun 18 04:38:22 2020 From: poornima.y.n at intel.com (N, Poornima Y) Date: Thu, 18 Jun 2020 04:38:22 +0000 Subject: [Starlingx-discuss] Build error in stx 3.0 In-Reply-To: <0dd8105b-db7d-6363-8ac4-17e4aced38fa@windriver.com> References: <0dd8105b-db7d-6363-8ac4-17e4aced38fa@windriver.com> Message-ID: Hi Scott, Yes, MY_REPO_ROOT_DIR is pointing to the right directory. Below is the output for echo $MY_REPO_ROOT_DIR with username as pyn: /localdisk/designer/pyn/stx Thanks, Poornima From: Scott Little Sent: Thursday, June 18, 2020 8:42 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Build error in stx 3.0 Is your MY_REPO_ROOT_DIR environment variable pointing to the right place ? generate-cgcs-centos-repo.sh should be creating a repo based on the content of $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools/rpms_centos.lst . 
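A rough consistency check, sketched from the paths quoted in this thread, to confirm whether the mirror snapshot handed to generate-cgcs-centos-repo.sh actually contains the versions named in rpms_centos.lst (the ntp/systemd mismatch reported above usually means the local mirror and the stx-tools branch are out of sync):

    # versions the r/stx.3.0 branch expects (paths taken from this thread)
    grep -E '^(systemd|ntp)-' $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools/rpms_centos.lst
    # versions actually present in the local mirror snapshot
    find /import/mirrors/CentOS/stx-r1/CentOS/pike/ -name 'systemd-219*.src.rpm' -o -name 'ntp-4.2.6p5*.src.rpm'

If the two disagree, refreshing the mirror from the same branch's lst files (for example with centos-mirror-tools' download script), or pointing generate-cgcs-centos-repo.sh at a snapshot built from that branch, is the usual fix.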
Scott On 2020-06-17 8:47 p.m., N, Poornima Y wrote: Hi all, I'm facing build issue on StarlingX 3.0. Below are the steps I have sync the code using below repo command: repo init -u https://opendev.org/starlingx/manifest -m default.xml -b r/stx.3.0 I get Missing targets for below src, when give the command generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/: Missing targets: ntp-4.2.6p5-28.el7.centos.src.rpm systemd-219-62.el7_6.5.src.rpm When I look into stx-tools/centos-mirror-tools/rpms_centos.lst file, the following src are mentioned: systemd-219-67.el7.src.rpm ntp-4.2.6p5-29.el7.centos.src.rpm Notice that, there is a mismatch in the version! If I proceed with build-pkgs following is the error I get: 17:30:55 ============ Build failed ============= 17:30:55 b5: ERROR: build_dir (417): Invalid srpm path 'mirror:Source/systemd-219-67.el7.src.rpm', evaluated as '/localdisk/designer/pyn/stx/cgcs-root/cgcs-centos-repo/Source/systemd-219-67.el7.src.rpm', found in '/localdisk/designer/pyn/stx/cgcs-root/stx/integ/base/systemd/centos/srpm_path' 17:30:55 ERROR: reaper (1304): Failed to build src.rpm from source at 'b5' 17:30:55 Can anyone point me as to how to resolve this error?. Am I missing anything? Thanks and Regards, Poornima Y N _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jun 18 09:23:22 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Jun 2020 05:23:22 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 751 - Still Failing! In-Reply-To: <2012010948.1754.1592455019304.JavaMail.javamailuser@localhost> References: <2012010948.1754.1592455019304.JavaMail.javamailuser@localhost> Message-ID: <1651898518.1762.1592472203495.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 751 Status: Still Failing Timestamp: 20200618T083239Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200618T080009Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20200618T080009Z DOCKER_BUILD_ID: jenkins-master-20200618T080009Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200618T080009Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200618T080009Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Thu Jun 18 09:23:25 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Jun 2020 05:23:25 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 582 - Failure! 
Message-ID: <247749371.1765.1592472205703.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 582 Status: Failure Timestamp: 20200618T080009Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200618T080009Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From yatindra.shashi at intel.com Thu Jun 18 12:17:20 2020 From: yatindra.shashi at intel.com (Shashi, Yatindra) Date: Thu, 18 Jun 2020 12:17:20 +0000 Subject: [Starlingx-discuss] Unable to log in controller-1 after changing password on active controller-0 In-Reply-To: References: Message-ID: Hi All, Thanks Volker for suggestion , but issue got solved by using controller-0:~$ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=password -o PubkeyAuthentication=no sysadmin at controller-1 Regards, Yatindra Shashi IoTG DE From: von Hoesslin, Volker Sent: 16 June 2020 12:45 To: Shashi, Yatindra ; starlingx-discuss at lists.starlingx.io Subject: AW: Unable to log in controller-1 after changing password on active controller-0 try to re-install controller-1 ? Von: Shashi, Yatindra [yatindra.shashi at intel.com] Gesendet: Freitag, 12. Juni 2020 12:13 An: starlingx-discuss at lists.starlingx.io Betreff: [URL wurde verändert] [Starlingx-discuss] Unable to log in controller-1 after changing password on active controller-0 Externe E-Mail! Öffnen Sie nur Links oder Anhänge von vertrauenswürdigen Absendern! Hi All, In AIO-Duplex Setup 3.0 As after certain days Stx force user to change the Password, I changed the password in the Controller-0 but I did not do on the cont-1. I had locked/unlocked Cont-1 and tried to login with old/new password but I get access denied. Is there way to reset or change sysadmin Password of cont-1. I am able to login to dashboard and cont-0 with the password I had. Mit freundlichen Grüßen/ with best regards, Yatindra Shashi IoTG DE- Intel Corporation Munich, Germany P Save Paper, Go Digital :) Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, https://sis-schwerin.de/externer-link/?href=www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jun 18 13:22:56 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Jun 2020 09:22:56 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 752 - Still Failing! 
In-Reply-To: <1927927128.1760.1592472201256.JavaMail.javamailuser@localhost> References: <1927927128.1760.1592472201256.JavaMail.javamailuser@localhost> Message-ID: <1478780075.1768.1592486577762.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 752 Status: Still Failing Timestamp: 20200618T123206Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T120010Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200618T120010Z DOCKER_BUILD_ID: jenkins-ussuri-20200618T120010Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T120010Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200618T120010Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri From build.starlingx at gmail.com Thu Jun 18 13:22:59 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 18 Jun 2020 09:22:59 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 18 - Still Failing! In-Reply-To: <403644384.1757.1592455021670.JavaMail.javamailuser@localhost> References: <403644384.1757.1592455021670.JavaMail.javamailuser@localhost> Message-ID: <486870161.1771.1592486580201.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 18 Status: Still Failing Timestamp: 20200618T120010Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T120010Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From ildiko.vancsa at gmail.com Thu Jun 18 14:22:01 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 18 Jun 2020 16:22:01 +0200 Subject: [Starlingx-discuss] Edge summaries from the virtual PTG Message-ID: <6AC4BD17-9C0D-4152-AFA3-E96073AE1336@gmail.com> Hi, It was great seeing many of you at the virtual PTG two weeks ago. I wrote up two blog posts to summarize edge related sessions at the event: * Edge Computing Group recap: https://superuser.openstack.org/articles/osf-edge-computing-group-ptg-overview/ * StarlingX recap: https://www.starlingx.io/blog/starlingx-vptg-june-2020-recap/ The articles have pointers to the etherpads we used at the event and include further pointers as well in case you would like to follow up on either discussion item or missed the event and would like to get a somewhat detailed view of what was discussed. Thanks and Best Regards, Ildikó From Davlet.Panech at windriver.com Thu Jun 18 16:55:35 2020 From: Davlet.Panech at windriver.com (Panech, Davlet) Date: Thu, 18 Jun 2020 16:55:35 +0000 Subject: [Starlingx-discuss] CENGN build fialures + builder docker file changes Message-ID: Hi all, The CENGN build failed today due to (in part) problems with the Dockerfile: https://opendev.org/starlingx/tools/src/branch/master/Dockerfile - It uses latest CentOS & EPEL repos to pull packages from. Even though it's based on a pinned docker image, centos:7.4.xxx, it's yum repos point to mirror.centos.org (incl updates). So the first yum command in the docker file upgrades half the system towards 7.8 or whatever - Our build scripts require mock <= 1.4.20, but what we get is version 2.x . 
Older compatible versions don't exist in CentOS or EPEL repos. - Dockerfile installs (towards the end) all repo files from centos-mirror-tools globally. This makes "yum install" essentially unusable in the docker image once its built because that set includes a bunch of incompatible repos e.g. CnetOS 7.x and 8.x both enabled. Note that these issues affect only the execution of build scripts -- individual RPMs are built in mock roots (inside Docker on CENGN) with their own yum configuration. Proposed changes: - Pin Dockerfile base image to centos:7.8.2003 (up from 7.4). This should be closer to what's been happening until now with latest packages being pulled in on top of a 7.4 base system as described above. - Replace global yum repo files with pinned URLs that point to 7.8 (using CentOS vault etc) . - Same for EPEL repos from here: https://archives.fedoraproject.org/pub/archive/epel/7.2020-04-20/ - Install this version of mock: https://kojipkgs.fedoraproject.org/packages/mock/1.4.16/1.el7/noarch/mock-1.4.16-1.el7.noarch.rpm . This is the only build scripts - compatible mock RPM I can find. - As a separate effort we should update build scripts to support recent mock versions. But pinning to mock 1.4.x will help with the immediate build problems Potential problem: Docker file installs anaconda, presumably because build-iso needs that (?). But if we pin centos repos, we will be creating ISO files based on the pinned anaconda packages. Unless build-iso itself runs inside mock, not sure it this is the case. Thoughts, comments? I'd like to get this fixed today if possible because it's gating CENGN builds. Thanks, D. From Davlet.Panech at windriver.com Thu Jun 18 17:08:50 2020 From: Davlet.Panech at windriver.com (Panech, Davlet) Date: Thu, 18 Jun 2020 17:08:50 +0000 Subject: [Starlingx-discuss] CENGN build fialures + builder docker file changes In-Reply-To: References: Message-ID: Until this is fixed, I believe the following workaround should work: Replace "yum install mock" in the Dockerfile with yum install \ https://kojipkgs.fedoraproject.org/packages/mock/1.4.16/1.el7/noarch/mock-1.4.16-1.el7.noarch.rpm \ https://kojipkgs.fedoraproject.org/packages/mock-core-configs/32.5/1.el7/noarch/mock-core-configs-32.5-1.el7.noarch.rpm D. ________________________________ From: Panech, Davlet Sent: June 18, 2020 12:55 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CENGN build fialures + builder docker file changes Hi all, The CENGN build failed today due to (in part) problems with the Dockerfile: https://opendev.org/starlingx/tools/src/branch/master/Dockerfile - It uses latest CentOS & EPEL repos to pull packages from. Even though it's based on a pinned docker image, centos:7.4.xxx, it's yum repos point to mirror.centos.org (incl updates). So the first yum command in the docker file upgrades half the system towards 7.8 or whatever - Our build scripts require mock <= 1.4.20, but what we get is version 2.x . Older compatible versions don't exist in CentOS or EPEL repos. - Dockerfile installs (towards the end) all repo files from centos-mirror-tools globally. This makes "yum install" essentially unusable in the docker image once its built because that set includes a bunch of incompatible repos e.g. CnetOS 7.x and 8.x both enabled. Note that these issues affect only the execution of build scripts -- individual RPMs are built in mock roots (inside Docker on CENGN) with their own yum configuration. Proposed changes: - Pin Dockerfile base image to centos:7.8.2003 (up from 7.4). 
This should be closer to what's been happening until now with latest packages being pulled in on top of a 7.4 base system as described above. - Replace global yum repo files with pinned URLs that point to 7.8 (using CentOS vault etc) . - Same for EPEL repos from here: https://archives.fedoraproject.org/pub/archive/epel/7.2020-04-20/ - Install this version of mock: https://kojipkgs.fedoraproject.org/packages/mock/1.4.16/1.el7/noarch/mock-1.4.16-1.el7.noarch.rpm . This is the only build scripts - compatible mock RPM I can find. - As a separate effort we should update build scripts to support recent mock versions. But pinning to mock 1.4.x will help with the immediate build problems Potential problem: Docker file installs anaconda, presumably because build-iso needs that (?). But if we pin centos repos, we will be creating ISO files based on the pinned anaconda packages. Unless build-iso itself runs inside mock, not sure it this is the case. Thoughts, comments? I'd like to get this fixed today if possible because it's gating CENGN builds. Thanks, D. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Thu Jun 18 19:29:00 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 18 Jun 2020 21:29:00 +0200 Subject: [Starlingx-discuss] Airship - StarlingX alignment discussion Message-ID: <7CFFBAFE-E35B-473E-AF4E-7FEEC9ABDB5B@gmail.com> Hi, At the PTG[1] we had a joint session with the Airship project where we agreed to setup a call to talk more about alignment and working on materials that describe the two projects’ missions and relationship. For those who are interested to join, we will have the call tomorrow (June 19) at 9am Pacific / 1600 UTC. You can find the call details below. Thanks, Ildikó [1] https://etherpad.opendev.org/p/stx-virtual-PTG-June Join Zoom Meeting https://zoom.us/j/97881188335 Meeting ID: 978 8118 8335 One tap mobile +16699006833,,97881188335# US (San Jose) +12532158782,,97881188335# US (Tacoma) Dial by your location +1 669 900 6833 US (San Jose) +1 253 215 8782 US (Tacoma) +1 301 715 8592 US (Germantown) +1 312 626 6799 US (Chicago) +1 346 248 7799 US (Houston) +1 646 876 9923 US (New York) Meeting ID: 978 8118 8335 Find your local number: https://zoom.us/u/agmZ5UfKK From maryx.camp at intel.com Thu Jun 18 20:14:36 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 18 Jun 2020 20:14:36 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 2020-06-17 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Our tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 2020-06-17  . All -- reviews merged since last meeting:  6 . All -- bug status -- 9 total, 3 WIP o [ww25] 2 new: Change $Home in build docs and Remove hardcoded "r1" in build scripts & docs o [ww24] 3: Document build-pkg command, Document packaging esp. build-srpm.data, Add new CLI for reconfiguring subcloud [WIP] o [ww23] Fix search function. Update from 17Jun20: Jimmy from OpenStack suggests merging Ildiko's reviews to see if that fixes it. 
Change scope of search bar: https://review.opendev.org/#/c/733383/  and Cross job to test changes: https://review.opendev.org/#/c/733483/ o [ww23] Add instructions for building stx-openstack application [WIP] o [ww20] Networking documentation [not started] o [ww16] Build Avoidance [WIP] https://docs.starlingx.io/developer_resources/build_guide.html#build-avoidance) . Reviews in progress:    o Layered Build guide (Poornima). Email questions about why this doc is needed. AR Mary reply to email thread, propose title change "Layered Build Reference" for Scott's original guide. o Chinese document for layered build https://review.opendev.org/#/c/726737/  Yong Fu has made changes. Requested Yi Wang / Austin to give technical +1 before merge. o Rook migration editorial  NOT officially in r4.0 but code will merge quickly afterwards. AR Mary figure out how to handle. Should be tied to the code merge -- Orig review 723291 has link to story https://storyboard.openstack.org/#!/story/2005527 o Kubernetes policy guides o Add how to access STX OpenStack CLI o Modifying layered build commands (add pike / remove pike)  This review is valid for the current situation: https://review.opendev.org/#/c/717424/  . Saul's review is valid for "future" situation -- we think will be merged in next couple of weeks https://review.opendev.org/#/c/693761/  . All -- Opens o FYI: The "submit a bug link" on the doc pages points to LP now, hooray! o Bruce noted issue with code samples display problems. Color coding of  . or - in snippets makes it hard to read. AR Mary investigate. [After meeting, verified this is due to recent change to openstackdocs theme] o Short discussion about future planning: . 1. After R4 release, we will branch the documentation. This will get docs set up for item 2. . 2. Upstream the Wind River Cloud Platform docs. This activity is still in planning/resource gathering stage.  . Branching the STX Docs - notes from discussion 20May20 - keep here for reference. o Recommendation from Bart to plan a method for versioning the documentation. The current approach has these issues:   . People are making updates to the docs for previous releases in the master branch, but not in the release branch. So if someone goes to look at the docs in the r/stx.3.0 branch, they will get stale info. . Our docs web site does not allow users to see info for previous releases for some areas. For example, our REST API  Reference (https://docs.starlingx.io/api-ref/index.html) is just showing master (I think). To see the r/stx.3.0 REST APIs, the user would have to go to each repo (e.g. metal, config, nfv) and choose the branch there. That isn't a good way to access these docs. . Now we only build the master version of docs. We want to change that for future. We want the web page to allow selecting different versions like the examples above.  o Our plan is to keep updating docs in master like we're doing now. After R4 is released, then we'd create an R4 branch and cut over to the new method.  . Ask Scott/Saul to include docs in branch process when they do it. [We think they're doing this already, because someone created an r/stx.3.0 branch.] . Once we have 4.0 branch, delete all the old release folders and have only one version of the docs that we keep up to date.  . The existing R3 branch is just a throwaway because it's not updated at all. [Delete old branches, unversion the current branch.] . 
After this is implemented, if master is updated with something that applies to previous releases (like a bug fix), you'd have to make a similar change in the specific branch.  o We need to plan this transition. We need help to implement the web changes. . Docs for other OpenStack projects allow you to switch between versions: https://docs.openstack.org/horizon/train/ (version switcher), https://docs.openstack.org/ironic/latest/index.html (switch in URL, w/ explanation), https://docs.openstack.org/keystone/latest/install/index.html (switch in URL) . AR Mary: Follow up with Ildiko on versioning/branch the docs for better usability. Need web developer help to implement these changes. Suggestions? From Dan.Voiculeasa at windriver.com Thu Jun 18 20:20:24 2020 From: Dan.Voiculeasa at windriver.com (Voiculeasa, Dan) Date: Thu, 18 Jun 2020 20:20:24 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , , Message-ID: Hello Martin, No, it is the first time seeing it. But I see that logic is introduced by Project: starlingx/ansible-playbooks Commit 514d4e7262f80a73ab37e0132f9e3b30088d14ad CommitDate: Wed Jun 10 13:17:00 2020 -0400 Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z Sent: Thursday, June 18, 2020 4:18 AM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: RE: issue for backup and restore Hi Dan I check restore for latest code, restore will fail with such log. I used to check code base Jun 5 master branch, no such issue. You know about this? TASK [bootstrap/bringup-essential-services : Create Armada node label] ********************************************************************************************************************** fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubectl", "label", "node", "controller-0", "armada=enabled"], "delta": "0:00:00.102152", "end": "2020-06-18 00:57:32.563552", "msg": "non-zero return code", "rc": 1, "start": "2020-06-18 00:57:32.461400", "stderr": "error: 'armada' already has a value (enabled), and --overwrite is false", "stderr_lines": ["error: 'armada' already has a value (enabled), and --overwrite is false"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************************************************************************************************************************** localhost : ok=354 changed=156 unreachable=0 failed=1 [sysadmin at controller-0 ~(keystone_admin)]$ BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Thursday, June 11, 2020 10:53 AM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: RE: issue for backup and restore Hi voiculeasa I confirm backup and restore works without ceph backend. This issue is caused with my improper provision step. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 9, 2020 5:54 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? 
Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. 
Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Jun 19 02:39:09 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 19 Jun 2020 02:39:09 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - June 18/2020 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases stx.4.0 - Openstack Rebase to Ussuri - Build issues from last night are not related to Ussuri; they're generic (related to dockerfiles) and happen in stx master. - Fix is being worked by Davlet and should have a workaround - Info provided to Ussuri team to attempt another build. Will monitor the results on the mailing list. - Reviews: - https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status:merged) - All code has been reviewed. Only one review requires a minor update.
- Code can merge once build/final sanity is successful - Feature Test Update - https://docs.google.com/spreadsheets/d/1C9n4aRQT7xMyTDCT5sfuZGNI9ermAX5BYRypzcCpQ6U/edit#gid=968103774 - Dates for remaining feature test: - FPGA Integration - 6/26 - TSN Support in Kata Container - 6/23 - Openstack Rebase to Ussuri - 7/3 - Regression Test Update - https://docs.google.com/spreadsheets/d/1gA3bnLS7aY2y8dKxm4MuqpWyELq3PVJMYtiHn4IWiAk/edit#gid=1717644237 - At risk - Some regression testing is running, but test-cases will need to be re-run after Ussuri is in. - Need two weeks once Ussuri is merged. Best case: July 6 From build.starlingx at gmail.com Fri Jun 19 06:44:14 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 19 Jun 2020 02:44:14 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 231 - Failure! Message-ID: <1428582770.1782.1592549055048.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 231 Status: Failure Timestamp: 20200619T021454Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T230012Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200618T230012Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T230012Z/logs MASTER_BUILD_NUMBER: 19 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200618T230012Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200618T230012Z DOCKER_BUILD_ID: jenkins-ussuri-20200618T230012Z-builder TIMESTAMP: 20200618T230012Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200618T230012Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200618T230012Z/outputs From build.starlingx at gmail.com Fri Jun 19 06:44:16 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 19 Jun 2020 02:44:16 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 19 - Still Failing! In-Reply-To: <777386976.1769.1592486578344.JavaMail.javamailuser@localhost> References: <777386976.1769.1592486578344.JavaMail.javamailuser@localhost> Message-ID: <1692420609.1785.1592549057187.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 19 Status: Still Failing Timestamp: 20200618T230012Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T230012Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From build.starlingx at gmail.com Fri Jun 19 08:23:18 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 19 Jun 2020 04:23:18 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 584 - Failure! 
Message-ID: <1723507669.1789.1592554998676.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 584 Status: Failure Timestamp: 20200619T080013Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200619T080013Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From build.starlingx at gmail.com Fri Jun 19 12:02:33 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 19 Jun 2020 08:02:33 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 20 - Still Failing! In-Reply-To: <1760437232.1783.1592549055626.JavaMail.javamailuser@localhost> References: <1760437232.1783.1592549055626.JavaMail.javamailuser@localhost> Message-ID: <1161974773.1793.1592568155405.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 20 Status: Still Failing Timestamp: 20200619T120014Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200619T120014Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From Davlet.Panech at windriver.com Fri Jun 19 13:03:58 2020 From: Davlet.Panech at windriver.com (Panech, Davlet) Date: Fri, 19 Jun 2020 13:03:58 +0000 Subject: [Starlingx-discuss] Ussuri & master build failures on CENGN Message-ID: Hi all, Today's build failed because https://kojipkgs.fedoraproject.org is down (it contains a tool we need during the build). We will restart the builds once its back up. We are also considering alternatives to avoid such problems in the future. D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Fri Jun 19 13:44:34 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 19 Jun 2020 13:44:34 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Hi Scott, From the latest log built out from the build you launched for ussuri, all OpenStack images build were pass, but other 4 STX service images failed to build, which is expected as I mentioned in building meeting yesterday. Those 4 STX service images are based on python2 packages. Since we upgraded OpenStack to Ussuri (requiring Python3), we could not use stx-centos-stable-wheels.tar (Python3 version), which is used for building for OpenStack Ussuri, to build those 4 images (which requires Python 2 dependent wheels). ======================================= http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200618T230012Z/logs/jenkins-STX_build_docker_flock_images-205.log.html ------------------------------------------ There were 4 failures: stx-fm-rest-api stx-keystone-api-proxy stx-nova-api-proxy stx-platformclients Build step 'Execute shell' marked build as failure Sending e-mails to: scott.little at windriver.com Extended Email Publisher is currently disabled in project settings ===================================== So, before making StarlingX platform packages completely with Python3, we have to use the previous generated stx-centos-stable-wheels.tar (Python2 version) to build those 4 images. Accordingly, the proposed build steps are as below. 
Step 1) Build base image Step 2) Build wheels tarball (WHEELS_USSURI) with Ussuri upper-constrains (Python 3 environment). Step 3) Build wheels tarball (WHEELS_PYTHON2) with existing Train upper-constrains (Python 2 environment), Or directly use originally built wheels for Train from Cengn. Step 4) Build docker images by using 2 wheels above. We just need to do some change in this step as below. OS=centos BUILD_STREAM=stable BRANCH=master CENTOS_BASE=root/stx-centos:master-stable-root WHEELS_USSURI=http:// WHEELS_PYTHON2=http:// $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \ --os centos \ --stream ${BUILD_STREAM} \ --base ${CENTOS_BASE} \ --wheels ${ WHEELS_USSURI } \ --only // OpenStack images $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \ --os centos \ --stream ${BUILD_STREAM} \ --base ${CENTOS_BASE} \ --wheels ${ WHEELS_PYTHON2 } \ --only // 4 STX service images Today, Chant and I have verified this solution by our own. So, I suggest going with this approach, before the whole platform upgrade from Python2 to Python3. Any comment or you might have another better idea? Regards, Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 1:06 To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help to review below Ussuri upgrade patches again? https://review.opendev.org/#/q/topic:for_ussuri+(status:open) We need your great help to push them merge! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月15日 23:49 To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris, Thanks a lot for your comments to our ussuri upgrade patches though it comes a little late. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Last Friday, I have replied to your comments one by one and updated related commit messages according to your proposal. Just want to know if you still have further concern on these patches. As you know our openstack upgrade task is in the final mile for STX 4.0, I’d like to work closely with you to push them get merged this week. Thanks!! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月11日 9:43 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... 
\ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. >>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. 
>>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! 
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? 
>>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? 
>>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! 
>>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. 
>>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. 
[OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. 
From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  
Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>> s >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io 
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Andy.Ning at windriver.com Fri Jun 19 15:12:40 2020 From: Andy.Ning at windriver.com (Ning, Antai (Andy)) Date: Fri, 19 Jun 2020 15:12:40 +0000 Subject: [Starlingx-discuss] [temp files] Be careful when using /tmp directory Message-ID: Hi All, Here is a piece of information that I hope can help with troubleshooting and prevent coding mistakes: In an stx system, the /tmp dir is the system's default temporary file location. The mktemp command and Python's mkstemp(), for example, will create temp files there if a temp dir is not explicitly set by an environment variable or in the function call, and some programs in the system do exactly that. But the /tmp dir is only meant to hold "temporary" files. CentOS has systemd-tmpfiles-clean.timer enabled; it wakes systemd-tmpfiles-clean.service every day to clean up files in /tmp that are older than 10 days (configured in /etc/tmpfiles.d/tmp.conf). The consequence is that some programs start failing or generating errors because they assume their temp files are still there. This is the root cause of [1]. In [1], sysinv-conductor uses the kubernetes python client to access k8s services. The kubernetes client caches credentials in files under /tmp. If the code that accesses k8s services is not triggered for more than 10 days, those /tmp files are removed as part of the systemd tmpfiles cleanup, and the client used by sysinv-conductor stops working, complaining that its temp files are not found. This is a known issue with the kubernetes python client, see [2]. So the conclusions are: * don't assume files in /tmp are "permanent"; be careful when using it from long-running processes. * since the kubernetes python client caches info in /tmp by default, either release the client after each use, or change its default temp dir by setting an environment variable such as TMPDIR. This impacts any StarlingX component that (mistakenly) assumes the /tmp dir is an OK place to keep anything but very short-lived files. [1] https://bugs.launchpad.net/starlingx/+bug/1883599 [2] https://github.com/kubernetes-client/python/issues/765 Hope this helps, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Fri Jun 19 15:19:48 2020 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Fri, 19 Jun 2020 15:19:48 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> , <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> Message-ID: +1 Zhipeng for stx/upstream (I am not a core on openstack-armada-app) Al ________________________________ From: Hu, Yong Sent: Tuesday, June 16, 2020 10:23 PM To: Miller, Frank ; starlingx Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Thank Frank for this +1. We need another +1 from other cores. Regards, Yong On 2020/6/17, 4:22 AM, "Miller, Frank" wrote: I think it makes sense to add 1 prime to existing repos.
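A minimal sketch of the TMPDIR workaround described in Andy's /tmp note above, for a long-running Python process that uses the kubernetes client. The directory path and the pod-listing call are placeholders, not StarlingX code:

import os
import tempfile

from kubernetes import client, config

# Placeholder location outside /tmp; any persistent, writable directory works.
SAFE_TMP = "/var/lib/example-service/tmp"
os.makedirs(SAFE_TMP, exist_ok=True)
os.environ["TMPDIR"] = SAFE_TMP   # honoured by mkstemp()/NamedTemporaryFile
tempfile.tempdir = None           # make tempfile re-read TMPDIR on next use

def list_pods(namespace="kube-system"):
    # Build a fresh client per call instead of caching one for days, so any
    # temp files it depends on are recreated rather than assumed to persist.
    config.load_kube_config()
    return client.CoreV1Api().list_namespaced_pod(namespace)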
As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks! Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Fri Jun 19 15:24:07 2020 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 19 Jun 2020 15:24:07 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> , <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> Message-ID: +1. I’ve added Zhipeng as a core to upstream. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Friday, June 19, 2020 11:20 AM To: Hu, Yong; Miller, Frank; starlingx Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream +1 Zhipeng for stx/upstream (I am not a core on openstack-armada-app) Al ________________________________ From: Hu, Yong Sent: Tuesday, June 16, 2020 10:23 PM To: Miller, Frank ; starlingx Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Thank Frank for this +1. We need another +1 from other cores. Regards, Yong On 2020/6/17, 4:22 AM, "Miller, Frank" wrote: I think it makes sense to add 1 prime to existing repos. As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. 
Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks! Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Fri Jun 19 16:46:00 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 19 Jun 2020 16:46:00 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200619T020124Z Message-ID: Sanity Test from 2020-June-19 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200619T020124Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200619T020124Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image002.png at 01D64672.3F9936B0] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype 
no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 20507 bytes Desc: image002.png URL: From Davlet.Panech at windriver.com Fri Jun 19 19:13:40 2020 From: Davlet.Panech at windriver.com (Panech, Davlet) Date: Fri, 19 Jun 2020 19:13:40 +0000 Subject: [Starlingx-discuss] CENGN build fialures + builder docker file changes In-Reply-To: References: , Message-ID: Correction: yum install \ http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-1.4.16-1.el7.noarch.rpm \ http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-core-configs-31.6-1.el7.noarch.rpm ________________________________ From: Panech, Davlet Sent: June 18, 2020 1:08 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CENGN build fialures + builder docker file changes Until this is fixed, I believe the following workaround should work: Replace "yum install mock" in the Dockerfile with yum install \ https://kojipkgs.fedoraproject.org/packages/mock/1.4.16/1.el7/noarch/mock-1.4.16-1.el7.noarch.rpm \ https://kojipkgs.fedoraproject.org/packages/mock-core-configs/32.5/1.el7/noarch/mock-core-configs-32.5-1.el7.noarch.rpm D. ________________________________ From: Panech, Davlet Sent: June 18, 2020 12:55 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CENGN build fialures + builder docker file changes Hi all, The CENGN build failed today due to (in part) problems with the Dockerfile: https://opendev.org/starlingx/tools/src/branch/master/Dockerfile - It uses latest CentOS & EPEL repos to pull packages from. Even though it's based on a pinned docker image, centos:7.4.xxx, it's yum repos point to mirror.centos.org (incl updates). So the first yum command in the docker file upgrades half the system towards 7.8 or whatever - Our build scripts require mock <= 1.4.20, but what we get is version 2.x . Older compatible versions don't exist in CentOS or EPEL repos. - Dockerfile installs (towards the end) all repo files from centos-mirror-tools globally. This makes "yum install" essentially unusable in the docker image once its built because that set includes a bunch of incompatible repos e.g. CnetOS 7.x and 8.x both enabled. Note that these issues affect only the execution of build scripts -- individual RPMs are built in mock roots (inside Docker on CENGN) with their own yum configuration. Proposed changes: - Pin Dockerfile base image to centos:7.8.2003 (up from 7.4). This should be closer to what's been happening until now with latest packages being pulled in on top of a 7.4 base system as described above. - Replace global yum repo files with pinned URLs that point to 7.8 (using CentOS vault etc) . - Same for EPEL repos from here: https://archives.fedoraproject.org/pub/archive/epel/7.2020-04-20/ - Install this version of mock: https://kojipkgs.fedoraproject.org/packages/mock/1.4.16/1.el7/noarch/mock-1.4.16-1.el7.noarch.rpm . This is the only build scripts - compatible mock RPM I can find. - As a separate effort we should update build scripts to support recent mock versions. 
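As a rough sketch of the pinning proposed above, the relevant Dockerfile fragment could look like the following. The base image tag and the mock RPM URLs are the ones given in this thread; the repo-file handling is a simplified assumption, not the actual starlingx/tools Dockerfile:

# Hypothetical fragment, not the real builder Dockerfile.
FROM centos:7.8.2003

# Pinned yum repo definitions (CentOS vault / EPEL archive) would be copied in
# here instead of the live mirror.centos.org files, e.g.:
# COPY centos-mirror-tools/yum.repos.d.pinned/ /etc/yum.repos.d/

# Install a build-scripts-compatible mock (2.x is too new), using the RPMs
# mirrored on CENGN as per the correction above.
RUN yum install -y \
      http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-1.4.16-1.el7.noarch.rpm \
      http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-core-configs-31.6-1.el7.noarch.rpm && \
    yum clean all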
But pinning to mock 1.4.x will help with the immediate build problems Potential problem: Docker file installs anaconda, presumably because build-iso needs that (?). But if we pin centos repos, we will be creating ISO files based on the pinned anaconda packages. Unless build-iso itself runs inside mock, not sure it this is the case. Thoughts, comments? I'd like to get this fixed today if possible because it's gating CENGN builds. Thanks, D. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Fri Jun 19 23:14:08 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 19 Jun 2020 23:14:08 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Hi Chris and Bob, Could you help add +2 and W+1 for last patch again? https://review.opendev.org/731668 Fix render error in cinder during openstack-helm rebase Last comment is for commit message, and the patch has been updated for commit message. Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 1:06 To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help to review below Ussuri upgrade patches again? https://review.opendev.org/#/q/topic:for_ussuri+(status:open) We need your great help to push them merge! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月15日 23:49 To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris, Thanks a lot for your comments to our ussuri upgrade patches though it comes a little late. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Last Friday, I have replied to your comments one by one and updated related commit messages according to your proposal. Just want to know if you still have further concern on these patches. As you know our openstack upgrade task is in the final mile for STX 4.0, I’d like to work closely with you to push them get merged this week. Thanks!! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月11日 9:43 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... 
\ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. >>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. 
>>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! 
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? 
>>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? 
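One rough way to observe that recovery sequence after the forced reboot (a sketch only; it assumes the stx-openstack pods run in the "openstack" namespace, and the pod name below is a placeholder to be read from the get pods output):

    # Watch the pods come back after the reboot and note which ones lag.
    kubectl -n openstack get pods -o wide --watch

    # Focus on the slow ones from the timing breakdown (ovs-db, neutron-ovs-agent).
    kubectl -n openstack get pods | grep -E 'openvswitch-db|neutron-ovs-agent'
    kubectl -n openstack describe pod <openvswitch-db-pod>      # probe/mount events
    kubectl -n openstack get events --sort-by=.lastTimestamp | tail -n 40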
>>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! 
>>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. 
>>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. 
[OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. 
From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  
Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>> s >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io 
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Sat Jun 20 02:07:07 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 19 Jun 2020 22:07:07 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 232 - Still Failing! In-Reply-To: <685657349.1780.1592549053135.JavaMail.javamailuser@localhost> References: <685657349.1780.1592549053135.JavaMail.javamailuser@localhost> Message-ID: <574658516.1799.1592618867268.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 232 Status: Still Failing Timestamp: 20200619T214631Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200619T185046Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200619T185046Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200619T185046Z/logs MASTER_BUILD_NUMBER: 21 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200619T185046Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200619T185046Z DOCKER_BUILD_ID: jenkins-ussuri-20200619T185046Z-builder TIMESTAMP: 20200619T185046Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200619T185046Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200619T185046Z/outputs From build.starlingx at gmail.com Sat Jun 20 02:07:48 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 19 Jun 2020 22:07:48 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 21 - Still Failing! In-Reply-To: <1658556985.1791.1592568149587.JavaMail.javamailuser@localhost> References: <1658556985.1791.1592568149587.JavaMail.javamailuser@localhost> Message-ID: <1606152674.1802.1592618869505.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 21 Status: Still Failing Timestamp: 20200619T185046Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200619T185046Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From build.starlingx at gmail.com Sat Jun 20 11:20:20 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 20 Jun 2020 07:20:20 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 233 - Still Failing! 
In-Reply-To: <2023927515.1797.1592618826349.JavaMail.javamailuser@localhost> References: <2023927515.1797.1592618826349.JavaMail.javamailuser@localhost> Message-ID: <1449129134.1808.1592652021246.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 233 Status: Still Failing Timestamp: 20200620T064343Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200620T033752Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200620T033752Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200620T033752Z/logs MASTER_BUILD_NUMBER: 22 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200620T033752Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200620T033752Z DOCKER_BUILD_ID: jenkins-ussuri-20200620T033752Z-builder TIMESTAMP: 20200620T033752Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200620T033752Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200620T033752Z/outputs From build.starlingx at gmail.com Sat Jun 20 11:20:22 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 20 Jun 2020 07:20:22 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 22 - Still Failing! In-Reply-To: <916960316.1800.1592618867866.JavaMail.javamailuser@localhost> References: <916960316.1800.1592618867866.JavaMail.javamailuser@localhost> Message-ID: <1444827110.1811.1592652023746.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 22 Status: Still Failing Timestamp: 20200620T033752Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200620T033752Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From build.starlingx at gmail.com Sat Jun 20 19:24:55 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 20 Jun 2020 15:24:55 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 234 - Still Failing! 
In-Reply-To: <1345594144.1806.1592652019207.JavaMail.javamailuser@localhost> References: <1345594144.1806.1592652019207.JavaMail.javamailuser@localhost> Message-ID: <1818293832.1815.1592681096558.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 234 Status: Still Failing Timestamp: 20200620T150245Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200620T120010Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200620T120010Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200620T120010Z/logs MASTER_BUILD_NUMBER: 23 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200620T120010Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200620T120010Z DOCKER_BUILD_ID: jenkins-ussuri-20200620T120010Z-builder TIMESTAMP: 20200620T120010Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200620T120010Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200620T120010Z/outputs From build.starlingx at gmail.com Sat Jun 20 19:24:58 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 20 Jun 2020 15:24:58 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 23 - Still Failing! In-Reply-To: <2026043833.1809.1592652021901.JavaMail.javamailuser@localhost> References: <2026043833.1809.1592652021901.JavaMail.javamailuser@localhost> Message-ID: <1375751768.1818.1592681098783.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 23 Status: Still Failing Timestamp: 20200620T120010Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200620T120010Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From alexandru.dimofte at intel.com Sat Jun 20 20:37:00 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sat, 20 Jun 2020 20:37:00 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200620T021946Z Message-ID: Sanity Test from 2020-June-20 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200620T021946Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200620T021946Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 
09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D6475B.AF112220] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20507 bytes Desc: image003.png URL: From build.starlingx at gmail.com Sun Jun 21 08:45:36 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 21 Jun 2020 04:45:36 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images_ussuri - Build # 1 - Failure! Message-ID: <347473716.1827.1592729136673.JavaMail.javamailuser@localhost> Project: STX_build_docker_images_ussuri Build #: 1 Status: Failure Timestamp: 20200621T061928Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200621T031452Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs MASTER_BUILD_NUMBER: 24 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200621T031452Z DOCKER_BUILD_ID: jenkins-ussuri-20200621T031452Z-builder TIMESTAMP: 20200621T031452Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/outputs From build.starlingx at gmail.com Sun Jun 21 08:45:33 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 21 Jun 2020 04:45:33 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images_ussuri - Build # 1 - Failure! 
Message-ID: <756595350.1824.1592729134270.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images_ussuri Build #: 1 Status: Failure Timestamp: 20200621T063808Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200621T031452Z OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root BASE_VERSION: ussuri-stable PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: ussuri PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs SERVICES_PYTHON2: stx-fm-rest-api,stx-keystone-api-proxy,stx-nova-api-proxy,stx-platformclients PUBLISH_TIMESTAMP: 20200621T031452Z FLOCK_VERSION: ussuri-centos-stable-20200621T031452Z PREFIX: ussuri PUBLISH_OUTPUTS_BASE_PYTHON2: /export/mirror/starlingx/master/centos/monolithic/latest_docker_image_build/outputs TIMESTAMP: 20200621T031452Z BUILD_STREAM: stable REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Sun Jun 21 08:45:38 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 21 Jun 2020 04:45:38 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 24 - Still Failing! In-Reply-To: <1064393933.1816.1592681097111.JavaMail.javamailuser@localhost> References: <1064393933.1816.1592681097111.JavaMail.javamailuser@localhost> Message-ID: <1950500521.1830.1592729138815.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 24 Status: Still Failing Timestamp: 20200621T031452Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From alexandru.dimofte at intel.com Sun Jun 21 21:44:53 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sun, 21 Jun 2020 21:44:53 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200621T013427Z Message-ID: Sanity Test from 2020-June-21 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200621T013427Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200621T013427Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on 
Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D6482E.5415D220] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20512 bytes Desc: image003.png URL: From zhipengs.liu at intel.com Mon Jun 22 01:44:32 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 22 Jun 2020 01:44:32 +0000 Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images_ussuri - Build # 1 - Failure! In-Reply-To: <756595350.1824.1592729134270.JavaMail.javamailuser@localhost> References: <756595350.1824.1592729134270.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, It seems push image failed. The following images were built: jenkins/dex:ussuri-centos-stable-build jenkins/k8s-cni-sriov:ussuri-centos-stable-build jenkins/k8s-plugins-sriov-network-device:ussuri-centos-stable-build jenkins/n3000-opae:ussuri-centos-stable-build jenkins/rvmc:ussuri-centos-stable-build jenkins/stx-aodh:ussuri-centos-stable-build jenkins/stx-barbican:ussuri-centos-stable-build jenkins/stx-ceilometer:ussuri-centos-stable-build jenkins/stx-cinder:ussuri-centos-stable-build jenkins/stx-glance:ussuri-centos-stable-build jenkins/stx-gnocchi:ussuri-centos-stable-build jenkins/stx-heat:ussuri-centos-stable-build jenkins/stx-horizon:ussuri-centos-stable-build jenkins/stx-ironic:ussuri-centos-stable-build jenkins/stx-keystone:ussuri-centos-stable-build jenkins/stx-libvirt:ussuri-centos-stable-build jenkins/stx-mariadb:ussuri-centos-stable-build jenkins/stx-neutron:ussuri-centos-stable-build jenkins/stx-nova:ussuri-centos-stable-build jenkins/stx-oidc-client:ussuri-centos-stable-build jenkins/stx-openstackclients:ussuri-centos-stable-build jenkins/stx-ovs:ussuri-centos-stable-build jenkins/stx-panko:ussuri-centos-stable-build jenkins/stx-placement:ussuri-centos-stable-build + --attempts 5 --push --latest --clean /tmp/jenkins8870387747521138351.sh: line 60: --attempts: command not found Build step 'Execute shell' marked build as failure Sending e-mails to: scott.little at windriver.com Email was triggered for: Failure - Any Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月21日 16:46 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images_ussuri - Build # 1 - Failure! 
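The "line 60: --attempts: command not found" message is what bash prints when an option string ends up on its own line and is executed as a command, which usually points to a lost line continuation in the Jenkins "Execute shell" step. A minimal sketch of that failure mode, assuming the step wraps build-stx-images.sh (the real script line is not shown in the log, and the elided options are placeholders):

    # Broken: the trailing backslash before the options is missing (or followed by
    # a stray character), so bash runs the next line as its own command:
    #   + --attempts 5 --push --latest --clean
    #   line 60: --attempts: command not found
    build-stx-images.sh ...
        --attempts 5 --push --latest --clean

    # Fixed: keep the options on the same logical line with a trailing backslash.
    build-stx-images.sh ... \
        --attempts 5 --push --latest --clean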
Project: STX_build_docker_flock_images_ussuri Build #: 1 Status: Failure Timestamp: 20200621T063808Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200621T031452Z OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root BASE_VERSION: ussuri-stable PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: ussuri PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs SERVICES_PYTHON2: stx-fm-rest-api,stx-keystone-api-proxy,stx-nova-api-proxy,stx-platformclients PUBLISH_TIMESTAMP: 20200621T031452Z FLOCK_VERSION: ussuri-centos-stable-20200621T031452Z PREFIX: ussuri PUBLISH_OUTPUTS_BASE_PYTHON2: /export/mirror/starlingx/master/centos/monolithic/latest_docker_image_build/outputs TIMESTAMP: 20200621T031452Z BUILD_STREAM: stable REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/outputs REGISTRY: docker.io From zhipengs.liu at intel.com Mon Jun 22 02:47:34 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 22 Jun 2020 02:47:34 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Hi Chris and Bob, Could you help to rebase below patch and W+1 now? https://review.opendev.org/#/c/731461/ Patches merge are ongoing, and 5/9 patches have already been merged But stopped at this patch, it might need trigger rebase to start gate job. It will block daily build if not merge all these 9 patches together. Thanks a lot! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月20日 7:14 To: 'Friesen, Chris' ; 'Church, Robert' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help add +2 and W+1 for last patch again? https://review.opendev.org/731668 Fix render error in cinder during openstack-helm rebase Last comment is for commit message, and the patch has been updated for commit message. Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 1:06 To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help to review below Ussuri upgrade patches again? https://review.opendev.org/#/q/topic:for_ussuri+(status:open) We need your great help to push them merge! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月15日 23:49 To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris, Thanks a lot for your comments to our ussuri upgrade patches though it comes a little late. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Last Friday, I have replied to your comments one by one and updated related commit messages according to your proposal. Just want to know if you still have further concern on these patches. 
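For reference, a hedged sketch of the rebase-and-reverify flow requested above for change 731461: either a core reviewer uses the Rebase button in Gerrit, or the owner refreshes the patchset locally as below. This assumes git-review is configured in a clone of the repository that hosts the change and that it targets master; adjust to the actual repo and branch.

    # Download the open change into a local topic branch.
    git review -d 731461

    # Rebase onto the current tip of the target branch, resolve any
    # conflicts, then push the refreshed patchset back to Gerrit.
    git fetch origin
    git rebase origin/master
    git review

A new patchset re-runs the check pipeline; once cores restore +2/Workflow+1, the gate runs and the chained changes can merge together.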
As you know our openstack upgrade task is in the final mile for STX 4.0, I’d like to work closely with you to push them get merged this week. Thanks!! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月11日 9:43 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. >>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. 
>>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. 
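For convenience, here is the CENGN base-image step requested earlier in this thread, consolidated into a single invocation. The "local-stx-build,..." and "stx-distro,..." values are elided in the original request and are left as placeholders here; only the two added Ussuri repos are spelled out.

    build-stx-base.sh \
        --repo local-stx-build,... \
        --repo stx-distro,... \
        --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \
        --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/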
>>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. 
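For anyone retrying the LP 1881899 scenario, a rough sketch of the dual-controller reset described above, run from the active controller of an AIO-DX lab. The hostname and the openstack namespace are the usual StarlingX defaults and may differ on other setups.

    # Reset the standby controller first, then the active one.
    ssh controller-1 'sudo reboot -f'
    sudo reboot -f

    # After both controllers come back, watch the stx-openstack pods until
    # everything returns to Running/Completed; pods stuck in
    # CrashLoopBackOff is the failure mode under discussion here.
    kubectl get pods -n openstack -o wide --watch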
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. 
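A minimal event/log sweep that may help isolate that sequence of events after the double reset. The openstack namespace and the application=mariadb label follow the usual openstack-helm conventions and are assumptions here, not details confirmed in this thread.

    # Events in the namespace sorted by time; tail keeps the 50 most recent.
    kubectl get events -n openstack --sort-by=.lastTimestamp | tail -n 50

    # State and restart counts of the galera pods.
    kubectl get pods -n openstack -l application=mariadb -o wide

    # Logs from the previous container instance of each mariadb pod, to
    # capture whatever crashed before the latest restart.
    for p in $(kubectl get pods -n openstack -l application=mariadb \
               -o jsonpath='{.items[*].metadata.name}'); do
        echo "=== $p ==="
        kubectl logs -n openstack "$p" --previous --tail=100
    done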
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. 
>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. 
>>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. >>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. 
[OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. 
[OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! 
>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! 
>>>> Zhipeng

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From zhipengs.liu at intel.com Mon Jun 22 03:01:26 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 22 Jun 2020 03:01:26 +0000 Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images_ussuri - Build # 1 - Failure! In-Reply-To: References: <756595350.1824.1592729134270.JavaMail.javamailuser@localhost> Message-ID: Maybe you disabled image publish, right? Since we need to wait for all ussuri patches get merged before publishing these OpenStack images. Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月22日 9:45 To: build.starlingx at gmail.com; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images_ussuri - Build # 1 - Failure! Hi Scott, It seems push image failed.
The following images were built: jenkins/dex:ussuri-centos-stable-build jenkins/k8s-cni-sriov:ussuri-centos-stable-build jenkins/k8s-plugins-sriov-network-device:ussuri-centos-stable-build jenkins/n3000-opae:ussuri-centos-stable-build jenkins/rvmc:ussuri-centos-stable-build jenkins/stx-aodh:ussuri-centos-stable-build jenkins/stx-barbican:ussuri-centos-stable-build jenkins/stx-ceilometer:ussuri-centos-stable-build jenkins/stx-cinder:ussuri-centos-stable-build jenkins/stx-glance:ussuri-centos-stable-build jenkins/stx-gnocchi:ussuri-centos-stable-build jenkins/stx-heat:ussuri-centos-stable-build jenkins/stx-horizon:ussuri-centos-stable-build jenkins/stx-ironic:ussuri-centos-stable-build jenkins/stx-keystone:ussuri-centos-stable-build jenkins/stx-libvirt:ussuri-centos-stable-build jenkins/stx-mariadb:ussuri-centos-stable-build jenkins/stx-neutron:ussuri-centos-stable-build jenkins/stx-nova:ussuri-centos-stable-build jenkins/stx-oidc-client:ussuri-centos-stable-build jenkins/stx-openstackclients:ussuri-centos-stable-build jenkins/stx-ovs:ussuri-centos-stable-build jenkins/stx-panko:ussuri-centos-stable-build jenkins/stx-placement:ussuri-centos-stable-build + --attempts 5 --push --latest --clean /tmp/jenkins8870387747521138351.sh: line 60: --attempts: command not found Build step 'Execute shell' marked build as failure Sending e-mails to: scott.little at windriver.com Email was triggered for: Failure - Any Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月21日 16:46 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images_ussuri - Build # 1 - Failure! Project: STX_build_docker_flock_images_ussuri Build #: 1 Status: Failure Timestamp: 20200621T063808Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200621T031452Z OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root BASE_VERSION: ussuri-stable PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: ussuri PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/logs SERVICES_PYTHON2: stx-fm-rest-api,stx-keystone-api-proxy,stx-nova-api-proxy,stx-platformclients PUBLISH_TIMESTAMP: 20200621T031452Z FLOCK_VERSION: ussuri-centos-stable-20200621T031452Z PREFIX: ussuri PUBLISH_OUTPUTS_BASE_PYTHON2: /export/mirror/starlingx/master/centos/monolithic/latest_docker_image_build/outputs TIMESTAMP: 20200621T031452Z BUILD_STREAM: stable REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200621T031452Z/outputs REGISTRY: docker.io _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Mon Jun 22 12:44:28 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 22 Jun 2020 12:44:28 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Zhipeng: The most recent emails from the Ussuri test builds from the weekend are they were still failing. I don’t understand why the Ussuri commits were allowed to merge before the build failures are addressed. What is the current status of the Ussuri builds? If the builds are still failing then you need to revert these commits and put a WFL -1 on the main commit and have a dependency linked to the other 8 commits so that none of the 9 commits merge until the builds pass. Then you can remove the WFL -1 on the 9 commits and let them merge as a batch. Frank -----Original Message----- From: Liu, ZhipengS Sent: Sunday, June 21, 2020 10:48 PM To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help to rebase below patch and W+1 now? https://review.opendev.org/#/c/731461/ Patches merge are ongoing, and 5/9 patches have already been merged But stopped at this patch, it might need trigger rebase to start gate job. It will block daily build if not merge all these 9 patches together. Thanks a lot! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月20日 7:14 To: 'Friesen, Chris' ; 'Church, Robert' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help add +2 and W+1 for last patch again? https://review.opendev.org/731668 Fix render error in cinder during openstack-helm rebase Last comment is for commit message, and the patch has been updated for commit message. Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 1:06 To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help to review below Ussuri upgrade patches again? https://review.opendev.org/#/q/topic:for_ussuri+(status:open) We need your great help to push them merge! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月15日 23:49 To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris, Thanks a lot for your comments to our ussuri upgrade patches though it comes a little late. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Last Friday, I have replied to your comments one by one and updated related commit messages according to your proposal. Just want to know if you still have further concern on these patches. As you know our openstack upgrade task is in the final mile for STX 4.0, I’d like to work closely with you to push them get merged this week. Thanks!! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月11日 9:43 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. 
Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. >>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! 
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. 
>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. 
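For anyone trying to reproduce the decoupling regression being discussed, the two cases differ only in the application state; a quick sketch using the same command quoted in the launchpads (app, chart, and namespace names as used in this thread):

    # Check whether stx-openstack is currently "applied" or only "uploaded".
    system application-list

    # Per the reports above this works while the app is applied, and only
    # crashes when the app is still in the uploaded state.
    system helm-override-show stx-openstack mariadb openstack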
>>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. 
(then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! 
>>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? 
>>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. >>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... 
[OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... 
[OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! 
>>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>> s >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at 
lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From zhipengs.liu at intel.com  Mon Jun 22 13:00:07 2020
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Mon, 22 Jun 2020 13:00:07 +0000
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
In-Reply-To:
References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com>
Message-ID:

Hi Frank,

You might have missed my several emails. The Ussuri images build has already passed; from the latest build failure log, the images were just not published to docker.io.

Thanks!
Zhipeng

-----Original Message-----
From: Miller, Frank
Sent: 2020年6月22日 20:44
To: Liu, ZhipengS ; Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Zhipeng:

The most recent emails from the Ussuri test builds over the weekend show they were still failing. I don't understand why the Ussuri commits were allowed to merge before the build failures were addressed. What is the current status of the Ussuri builds?

If the builds are still failing, then you need to revert these commits, put a WFL -1 on the main commit, and have a dependency linked to the other 8 commits so that none of the 9 commits merge until the builds pass. Then you can remove the WFL -1 on the 9 commits and let them merge as a batch.

Frank

-----Original Message-----
From: Liu, ZhipengS
Sent: Sunday, June 21, 2020 10:48 PM
To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Hi Chris and Bob,

Could you help to rebase the patch below and W+1 it now?
https://review.opendev.org/#/c/731461/
Patch merges are ongoing, and 5 of the 9 patches have already been merged, but they are stopped at this patch; it might need a rebase to trigger the gate job. It will block the daily build if we do not merge all 9 patches together.

Thanks a lot!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS
Sent: 2020年6月20日 7:14
To: 'Friesen, Chris' ; 'Church, Robert' ; 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Hi Chris and Bob,

Could you help add +2 and W+1 for the last patch again?
https://review.opendev.org/731668 Fix render error in cinder during openstack-helm rebase
The last comment was about the commit message, and the patch has been updated accordingly.

Thanks!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS
Sent: 2020年6月17日 1:06
To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Chris and Bob, Could you help to review below Ussuri upgrade patches again? https://review.opendev.org/#/q/topic:for_ussuri+(status:open) We need your great help to push them merge! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月15日 23:49 To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris, Thanks a lot for your comments to our ussuri upgrade patches though it comes a little late. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Last Friday, I have replied to your comments one by one and updated related commit messages according to your proposal. Just want to know if you still have further concern on these patches. As you know our openstack upgrade task is in the final mile for STX 4.0, I’d like to work closely with you to push them get merged this week. Thanks!! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月11日 9:43 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) 
>>> Upgrade openstack-helm-infra                                  zhipeng liu    starlingx/openstack-armada-app    workflow-1
>>> Add mariadb database config override to support ipv6          zhipeng liu    starlingx/openstack-armada-app
>>> Fix render error in cinder during openstack-helm rebase       zhipeng liu    starlingx/openstack-armada-app
>>> Update download list for openstack-helm upgrade               zhipeng liu    starlingx/openstack-armada-app
>>> Update manifest.yaml file for openstack-helm upgrade          zhipeng liu    starlingx/openstack-armada-app
>>> Upgrade openstack-helm                                        zhipeng liu    starlingx/openstack-armada-app
>>>
>>> # Below 3 patches are for the OpenStack upgrade.
>>> Update manifest.yaml file for ussuri openstack                YU CHENGDE     starlingx/openstack-armada-app
>>> Modify build-tools and stable-wheels for Ussuri upgrading     YU CHENGDE     starlingx/root
>>> Upgrade openstack docker images for stable/ussuri             YU CHENGDE     starlingx/upstream
>>>
>>> After removing the required python3 dependent packages locally, we can build out the base image
>>> and the OpenStack service images successfully with the command below.
>>> ===============================================================================
>>>
>>> @Scott, please help to update the CENGN build script with the 2 additional repos below and help to trigger an image build:
>>> build-stx-base.sh
>>>    --repo local-stx-build,... \
>>>    --repo stx-distro,... \
>>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \
>>>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/
>>>
>>> Thanks a lot!
>>> Zhipeng
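>>>
>>> For reference, the chaining described above is done with Gerrit commit-message footers. A minimal
>>> sketch of what each dependent change would carry (the change number below is a placeholder for the
>>> gating openstack-helm-infra patch that holds the Workflow-1; it is not a real review number):
>>>
>>>     <subject of the dependent Ussuri change>
>>>
>>>     <commit message body>
>>>
>>>     Depends-On: https://review.opendev.org/#/c/XXXXXX/
>>>     Change-Id: I<change-id of this patch>
>>>
>>> With these footers in place, Zuul should not let any of the dependent changes merge ahead of the
>>> gating patch, so removing the Workflow-1 on that single patch releases the whole set as a batch.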
The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! 
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  
After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. 
>>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? 
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. >>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. 
[OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... 
[OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? 
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! 
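On the python2.7-to-python3.6 move mentioned above, a quick spot check that a rebuilt service image really runs python3 could look like the sketch below; the image name and tag are placeholders for whatever the local build produced, not real published tags:

    docker run --rm --entrypoint /bin/sh docker.io/starlingx/stx-nova:<local-build-tag> \
        -c 'python3 --version'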
>>>> >>>> Zhipeng >>>> >>>> _______________________________________________
>>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io
>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
_______________________________________________ Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From Frank.Miller at windriver.com Mon Jun 22 13:20:08 2020
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Mon, 22 Jun 2020 13:20:08 +0000
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID:
Thanks Zhipeng. Did the issue with the 4 docker images failing to build because they require python3 get addressed?
I.e. these ones from your email on Friday: stx-fm-rest-api stx-keystone-api-proxy stx-nova-api-proxy stx-platformclients
I don’t see them included in the list of images built in your email yesterday.
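One way to double-check on the publish side would be to try pulling the four images from Docker Hub. This is a sketch only; the starlingx namespace is the usual one, but BUILD_TAG below is a placeholder for whatever tag the CENGN job actually publishes:

    BUILD_TAG=<tag-from-the-cengn-build>   # placeholder, not a real tag
    for img in stx-fm-rest-api stx-keystone-api-proxy stx-nova-api-proxy stx-platformclients; do
        docker pull docker.io/starlingx/${img}:${BUILD_TAG} || echo "not published: ${img}"
    done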
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, June 22, 2020 9:00 AM
To: Miller, Frank ; Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank, You might have missed my earlier emails. The Ussuri image build has already passed.
From the latest build failure log, the images were simply not published to docker.io.
Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月22日 20:44
To: Liu, ZhipengS ; Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Zhipeng: The most recent emails about the Ussuri test builds from the weekend indicate they were still failing.
I don’t understand why the Ussuri commits were allowed to merge before the build failures were addressed.
What is the current status of the Ussuri builds? If the builds are still failing then you need to revert these commits,
put a WFL -1 on the main commit, and link a dependency from the other 8 commits so that none of the 9 commits
merge until the builds pass. Then you can remove the WFL -1 on the 9 commits and let them merge as a batch.
Frank -----Original Message----- From: Liu, ZhipengS Sent: Sunday, June 21, 2020 10:48 PM
To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Chris and Bob, Could you help rebase the patch below and add W+1 now? https://review.opendev.org/#/c/731461/
Merging is ongoing, and 5 of the 9 patches have already been merged, but it has stopped at this patch;
it may need a rebase to trigger the gate job. The daily build will be blocked if all 9 patches are not merged together.
Thanks a lot! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月20日 7:14
To: 'Friesen, Chris' ; 'Church, Robert' ; 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Chris and Bob, Could you help add +2 and W+1 for the last patch again?
https://review.opendev.org/731668 Fix render error in cinder during openstack-helm rebase
The last review comment was about the commit message, and the patch has been updated accordingly.
Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 1:06
To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Chris and Bob, Could you help review the Ussuri upgrade patches below again?
https://review.opendev.org/#/q/topic:for_ussuri+(status:open) We need your help to get them merged!
Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月15日 23:49
To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Chris, Thanks a lot for your comments on our Ussuri upgrade patches, even though they came a little late.
https://review.opendev.org/#/q/topic:for_ussuri+(status:open)
Last Friday I replied to your comments one by one and updated the related commit messages according to your proposals.
I just want to know whether you still have further concerns about these patches. As you know, our OpenStack upgrade
task is in the final mile for STX 4.0, and I’d like to work closely with you to get them merged this week. Thanks!!
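For reference, a rough sketch of how one of these open reviews can be rebased to re-trigger the gate job, and how the batch can be kept together with a Depends-On footer. It assumes git-review is installed and uses the change number linked above:

    git review -d 731461          # download the open change locally
    git fetch origin && git rebase origin/master
    git review                    # re-push; Zuul re-runs check/gate on the new patch set
    # In the other commits of the batch, a footer such as
    #   Depends-On: https://review.opendev.org/#/c/731461/
    # keeps Zuul from merging them until this change has merged.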
Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月11日 9:43 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. >>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. 
>>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. 
>>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. 
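For reproducing LP 1881899, a rough sketch of the reset sequence described above, with a simple wait loop to see how long recovery takes. It assumes an AIO-DX lab with controller-0 active and controller-1 standby, and kubectl access once the nodes come back:

    ssh controller-1 'sudo reboot -f'        # reset the standby controller first
    sudo reboot -f                           # then reset the active controller
    # after both controllers boot again, watch the openstack namespace until
    # no pods are left outside Running/Completed
    watch -n 10 'kubectl -n openstack get pods --no-headers | grep -vE "Running|Completed"'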
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. 
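A sketch of the data worth capturing while isolating that sequence of events, using standard kubectl only; the mariadb pod name follows the pattern already mentioned in this thread, and other pod names will differ per lab:

    kubectl -n openstack get pods -o wide | grep -vE "Running|Completed"
    kubectl -n openstack describe pod mariadb-server-0      # probe failures, mount events
    kubectl -n openstack logs mariadb-server-0 --previous   # output of the last crashed container
    kubectl -n openstack get events --sort-by=.lastTimestamp | tail -n 50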
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. 
>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. 
>>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. >>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. 
[OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. 
[OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! 
>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! 
>>>> >>>> Zhipeng >>>> >>>> _______________________________________________
>>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io
>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
_______________________________________________ Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From zhipengs.liu at intel.com Mon Jun 22 13:27:17 2020
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Mon, 22 Jun 2020 13:27:17 +0000
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID:
Hi Frank, I have sent an email about these 4 python2-based docker images (attached), and I also gave my proposed
solution in that email. I saw that Scott has already changed his CENGN script in the latest build log.
Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月22日 21:20
To: Liu, ZhipengS ; Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Thanks Zhipeng.
Did the issue with the 4 docker images failing to build due to requiring python3 get addressed? Ie these ones from your email on Friday: stx-fm-rest-api stx-keystone-api-proxy stx-nova-api-proxy stx-platformclients I don’t see them included in your list of images build in your email yesterday. Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, June 22, 2020 9:00 AM To: Miller, Frank ; Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, You might miss my several emails. The ussuri images build has already been pass. From latest build failing log, it was just not published to docker.io Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月22日 20:44 To: Liu, ZhipengS ; Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: The most recent emails from the Ussuri test builds from the weekend are they were still failing. I don’t understand why the Ussuri commits were allowed to merge before the build failures are addressed. What is the current status of the Ussuri builds? If the builds are still failing then you need to revert these commits and put a WFL -1 on the main commit and have a dependency linked to the other 8 commits so that none of the 9 commits merge until the builds pass. Then you can remove the WFL -1 on the 9 commits and let them merge as a batch. Frank -----Original Message----- From: Liu, ZhipengS Sent: Sunday, June 21, 2020 10:48 PM To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help to rebase below patch and W+1 now? https://review.opendev.org/#/c/731461/ Patches merge are ongoing, and 5/9 patches have already been merged But stopped at this patch, it might need trigger rebase to start gate job. It will block daily build if not merge all these 9 patches together. Thanks a lot! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月20日 7:14 To: 'Friesen, Chris' ; 'Church, Robert' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help add +2 and W+1 for last patch again? https://review.opendev.org/731668 Fix render error in cinder during openstack-helm rebase Last comment is for commit message, and the patch has been updated for commit message. Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月17日 1:06 To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris and Bob, Could you help to review below Ussuri upgrade patches again? https://review.opendev.org/#/q/topic:for_ussuri+(status:open) We need your great help to push them merge! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月15日 23:49 To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Chris, Thanks a lot for your comments to our ussuri upgrade patches though it comes a little late. 
https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Last Friday, I have replied to your comments one by one and updated related commit messages according to your proposal. Just want to know if you still have further concern on these patches. As you know our openstack upgrade task is in the final mile for STX 4.0, I’d like to work closely with you to push them get merged this week. Thanks!! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月11日 9:43 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. 
>>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  
But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. 
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. 
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. 
>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. 
>>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. >>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. 
[OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. 
[OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! 
>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! 
>>>>
>>>> Zhipeng
>>>>
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An embedded message was scrubbed...
From: "Liu, ZhipengS"
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Date: Fri, 19 Jun 2020 13:44:34 +0000
Size: 60806
URL:

From sgw at linux.intel.com  Mon Jun 22 15:27:46 2020
From: sgw at linux.intel.com (Saul Wold)
Date: Mon, 22 Jun 2020 08:27:46 -0700
Subject: [Starlingx-discuss] CENGN build fialures + builder docker file changes
In-Reply-To:
References:
Message-ID:

Davlet,

Did you backport this to the r/stx.3.0 branch?  It will have the same
problem as master correct?

Do we just take the mock pinning change or the full update?

Sau!
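For reference, the "mock pinning change" on its own would be a small edit to
the builder Dockerfile: a minimal sketch, assuming the pinned RPM URLs from
Davlet's correction quoted below (the RUN wrapper and the -y flag are
assumptions here; the thread only gives the bare yum install command):

    # Replace the plain "yum install mock" step with pinned 1.4.x RPMs so
    # the build scripts do not pick up the incompatible mock 2.x from EPEL.
    RUN yum install -y \
        http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-1.4.16-1.el7.noarch.rpm \
        http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-core-configs-31.6-1.el7.noarch.rpm

The "full update" would additionally pin the base image and the yum repo
files, as in the proposal quoted further down.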
On 6/19/20 12:13 PM, Panech, Davlet wrote:
> Correction:
>
> yum install \
> http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-1.4.16-1.el7.noarch.rpm \
> http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-core-configs-31.6-1.el7.noarch.rpm
>
> ------------------------------------------------------------------------
> *From:* Panech, Davlet
> *Sent:* June 18, 2020 1:08 PM
> *To:* starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] CENGN build fialures + builder docker file changes
>
> Until this is fixed, I believe the following workaround should work:
>
> Replace "yum install mock" in the Dockerfile with
>
> yum install \
> https://kojipkgs.fedoraproject.org/packages/mock/1.4.16/1.el7/noarch/mock-1.4.16-1.el7.noarch.rpm \
> https://kojipkgs.fedoraproject.org/packages/mock-core-configs/32.5/1.el7/noarch/mock-core-configs-32.5-1.el7.noarch.rpm
>
> D.
> ------------------------------------------------------------------------
> *From:* Panech, Davlet
> *Sent:* June 18, 2020 12:55 PM
> *To:* starlingx-discuss at lists.starlingx.io
> *Subject:* [Starlingx-discuss] CENGN build fialures + builder docker file changes
>
> Hi all,
>
> The CENGN build failed today due to (in part) problems with the Dockerfile:
>
> https://opendev.org/starlingx/tools/src/branch/master/Dockerfile
>
> - It uses the latest CentOS & EPEL repos to pull packages from. Even though
> it's based on a pinned docker image, centos:7.4.xxx, its yum repos
> point to mirror.centos.org (incl. updates). So the first yum command in
> the Dockerfile upgrades half the system towards 7.8 or whatever.
>
> - Our build scripts require mock <= 1.4.20, but what we get is version
> 2.x. Older compatible versions don't exist in CentOS or EPEL repos.
>
> - The Dockerfile installs (towards the end) all repo files from
> centos-mirror-tools globally. This makes "yum install" essentially
> unusable in the docker image once it's built, because that set includes a
> bunch of incompatible repos, e.g. CentOS 7.x and 8.x both enabled.
>
> Note that these issues affect only the execution of build scripts --
> individual RPMs are built in mock roots (inside Docker on CENGN) with
> their own yum configuration.
>
> Proposed changes:
>
> - Pin the Dockerfile base image to centos:7.8.2003 (up from 7.4). This
> should be closer to what's been happening until now, with latest packages
> being pulled in on top of a 7.4 base system as described above.
> - Replace global yum repo files with pinned URLs that point to 7.8
> (using CentOS vault etc.).
> - Same for EPEL repos, from here:
> https://archives.fedoraproject.org/pub/archive/epel/7.2020-04-20/
> - Install this version of mock:
> https://kojipkgs.fedoraproject.org/packages/mock/1.4.16/1.el7/noarch/mock-1.4.16-1.el7.noarch.rpm
> This is the only build-scripts-compatible mock RPM I can find.
> - As a separate effort we should update the build scripts to support recent
> mock versions. But pinning to mock 1.4.x will help with the immediate
> build problems.
>
> Potential problem: the Dockerfile installs anaconda, presumably because
> build-iso needs that (?). But if we pin the centos repos, we will be
> creating ISO files based on the pinned anaconda packages. Unless
> build-iso itself runs inside mock -- not sure if this is the case.
>
> Thoughts, comments?
>
> I'd like to get this fixed today if possible because it's gating CENGN
> builds.
>
> Thanks,
> D.
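Taken together, the proposal amounts to freezing everything the builder image
pulls at build time. A minimal sketch of the base-image and repo pinning part,
assuming the standard CentOS vault layout for 7.8.2003 (the vault URLs, repo
IDs and gpgcheck=0 setting are illustrative assumptions; only the base image
tag and the archived EPEL URL come from the proposal above). Combined with the
pinned mock install shown earlier, this is roughly the "full update":

    FROM centos:7.8.2003

    # Drop the stock repo definitions that track mirror.centos.org and
    # current EPEL, and replace them with frozen snapshots so the builder
    # image stops drifting between builds.
    RUN rm -f /etc/yum.repos.d/*.repo && \
        printf '%s\n' \
            '[centos-7.8.2003-base]' \
            'name=CentOS 7.8.2003 base (vault)' \
            'baseurl=http://vault.centos.org/7.8.2003/os/x86_64/' \
            'gpgcheck=0' \
            '' \
            '[centos-7.8.2003-updates]' \
            'name=CentOS 7.8.2003 updates (vault)' \
            'baseurl=http://vault.centos.org/7.8.2003/updates/x86_64/' \
            'gpgcheck=0' \
            '' \
            '[epel-7-archive-2020-04-20]' \
            'name=EPEL 7 archive snapshot 2020-04-20' \
            'baseurl=https://archives.fedoraproject.org/pub/archive/epel/7.2020-04-20/x86_64/' \
            'gpgcheck=0' \
            > /etc/yum.repos.d/pinned.repo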
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From alexandru.dimofte at intel.com  Mon Jun 22 16:30:04 2020
From: alexandru.dimofte at intel.com (Dimofte, Alexandru)
Date: Mon, 22 Jun 2020 16:30:04 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200621T230407Z
Message-ID:

Sanity Test from 2020-June-22 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200621T230407Z/outputs/iso/ )

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200621T230407Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
Setup                    04 TCs [PASS]
Provisioning             01 TCs [PASS]
Sanity OpenStack         49 TCs [PASS]
Sanity Platform          07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup                    04 TCs [PASS]
Provisioning             01 TCs [PASS]
Sanity OpenStack         52 TCs [PASS]
Sanity Platform          07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup                    04 TCs [PASS]
Provisioning             01 TCs [PASS]
Sanity OpenStack         52 TCs [PASS]
Sanity Platform          08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup                    04 TCs [PASS]
Provisioning             01 TCs [PASS]
Sanity OpenStack         52 TCs [PASS]
Sanity Platform          09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
Setup                    04 TCs [PASS]
Provisioning             01 TCs [PASS]
Sanity OpenStack         49 TCs [PASS]
Sanity Platform          07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup                    04 TCs [PASS]
Provisioning             01 TCs [PASS]
Sanity OpenStack         52 TCs [PASS]
Sanity Platform          07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard (2+2)
Setup                    04 TCs [PASS]
Provisioning             01 TCs [PASS]
Sanity OpenStack         52 TCs [PASS]
Sanity Platform          08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
Setup                    04 TCs [PASS]
Provisioning             01 TCs [PASS]
Sanity OpenStack         52 TCs [PASS]
Sanity Platform          09 TCs [PASS]
TOTAL: [ 66 TCs ]

Regards,
STX Validation Team

Dimofte Alexandru
Software Engineer
Transportation Solutions Division
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte at intel.com
Intel Romania
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.little at windriver.com  Mon Jun 22 17:32:04 2020
From: scott.little at windriver.com (Scott Little)
Date: Mon, 22 Jun 2020 13:32:04 -0400
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
In-Reply-To:
References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com>
Message-ID: <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com>

The ussuri build only passed because I created customized build scripts
for that feature branch.  This was to prove that the python2/3 issues
were the only issues with the build.
I was in no way committing to deliver those customizations into the master branch build. 1) We would loose the ability to build on older branches without significant extra effort. 2) It would be very fragile.  It relies on hard code list of packages that are to be compiled for python3 vs python2.  I'm sure that list will be changing over time. What I would like to see is build-stx-images.sh modified to look for and consume a config file that tells it how to partition wheels and images into two separate builds.  Externally, the command remains a single invocation with no new arguments. The config file could then be modified to individually shift images from python2 to python3 build method without having to tinker with cengn build scripts every time there is a change. Scott On 2020-06-22 9:00 a.m., Liu, ZhipengS wrote: > Hi Frank, > > You might miss my several emails. > The ussuri images build has already been pass. > From latest build failing log, it was just not published to docker.io > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月22日 20:44 > To: Liu, ZhipengS ; Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > The most recent emails from the Ussuri test builds from the weekend are they were still failing. I don’t understand why the Ussuri commits were allowed to merge before the build failures are addressed. What is the current status of the Ussuri builds? > > If the builds are still failing then you need to revert these commits and put a WFL -1 on the main commit and have a dependency linked to the other 8 commits so that none of the 9 commits merge until the builds pass. Then you can remove the WFL -1 on the 9 commits and let them merge as a batch. > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Sunday, June 21, 2020 10:48 PM > To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Chris and Bob, > > Could you help to rebase below patch and W+1 now? > https://review.opendev.org/#/c/731461/ > > Patches merge are ongoing, and 5/9 patches have already been merged But stopped at this patch, it might need trigger rebase to start gate job. > It will block daily build if not merge all these 9 patches together. > > Thanks a lot! > Zhipeng > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月20日 7:14 > To: 'Friesen, Chris' ; 'Church, Robert' ; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Chris and Bob, > > Could you help add +2 and W+1 for last patch again? > https://review.opendev.org/731668 > Fix render error in cinder during openstack-helm rebase > > Last comment is for commit message, and the patch has been updated for commit message. > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月17日 1:06 > To: Friesen, Chris ; Church, Robert ; starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Chris and Bob, > > Could you help to review below Ussuri upgrade patches again? > https://review.opendev.org/#/q/topic:for_ussuri+(status:open) > We need your great help to push them merge! > > Thanks! 
> Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月15日 23:49 > To: Friesen, Chris ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Chris, > > Thanks a lot for your comments to our ussuri upgrade patches though it comes a little late. > https://review.opendev.org/#/q/topic:for_ussuri+(status:open) > Last Friday, I have replied to your comments one by one and updated related commit messages according to your proposal. > Just want to know if you still have further concern on these patches. > As you know our openstack upgrade task is in the final mile for STX 4.0, I’d like to work closely with you to push them get merged this week. > > Thanks!! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月11日 9:43 > To: Scott Little ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Scott, > > I have fixed merge conflict now! > If you have any concern, please let me know. > > Thanks! > Zhipeng > > -----Original Message----- > From: Scott Little > Sent: 2020年6月11日 4:28 > To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Six of the nine updates are in a state of merge conflict. > > Please resolve the conflicts so that I can make progress wit a CENGN build. > > Scott > > > > On 2020-06-10 9:20 a.m., Scott Little wrote: >> CENGN cycles aren't a problem.  People resources is a challenge. >> >> So the ask is for a manual build, on CENGN, adding in the nine patches >> listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). >> >> .. and the addition of two repos to the build-stx-base.sh step >> >> build-stx-base.sh >>    --repo local-stx-build,... \ >>    --repo stx-distro,... \ >>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>    --repo >> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> >> Is that correct? >> >> Scott >> >> >> On 2020-06-09 9:04 a.m., Saul Wold wrote: >>> Frank, Scott, Davelet: >>> >>> Are there cycles available on Cengn (and people resources) to do a >>> Cengn build with the Ussuri patch set applied?  I know this is >>> different than a branch build.  I think we have done this kind of >>> thing in the past. >>> >>> This might help to make sure we don't have any more Cengn build >>> issues and could give the Test team a sanity spin with a Ussuri/Cengn >>> build. >>> >>> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >>> email. >>> >>> Thanks >>>   Sau! >>> >>> >>> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>>> Hi all, >>>> >>>> So far, all block issues and concerns have been addressed. >>>> Since we have passed all sanity test, and Ussuri OpenStack has been >>>> officially released last month, there should be no more reason to >>>> block these patches merge. >>>> >>>> Next step: >>>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>>> merged. We need great help from core guys! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> >>>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>>> patch with workflow-1 and add depends-on for other patches as we >>>> need to merge them together.) 
Upgrade openstack-helm-infra zhipeng >>>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>>> database config override to support ipv6 zhipeng liu >>>> starlingx/openstack-armada-app Fix render error in cinder during >>>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>>> Update download list for openstack-helm upgrade zhipeng liu >>>> starlingx/openstack-armada-app Update manifest.yaml file for >>>> openstack-helm upgrade. >>>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>>> zhipeng liu starlingx/openstack-armada-app >>>> >>>> # Below 3 patches is for OpenStack upgrade. >>>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>>> docker images for stable/ussuri        YU CHENGDE starlingx/upstream >>>> >>>> >>>> After removing required python3 dependent packages from local, we >>>> can build out base image and OpenStack service images successfully >>>> with below command. >>>> ==================================================================== >>>> =========== >>>> >>>> @Scott, please help to update cengn build script with below 2 >>>> additional repos and help to trigger image build build-stx-base.sh >>>>    --repo local-stx-build,... \ >>>>    --repo stx-distro,... \ >>>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>>> \ >>>>    --repo >>>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>>> >>>> Thanks a lot! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月8日 16:54 >>>> To: 'Miller, Frank' ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> It is not easy to figure out whether/how/when OpenStack-helm-info >>>> upstream introduce this issue and then fix it. >>>> I also could not find any fix in LP[1], which just mentioned that >>>> this intermittent issue not hit us after some changes in related field. >>>> >>>> Anyhow, below 2 patches should fix potential bug and I could not see >>>> the same error log again in our ussuri upgrade EB. >>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>>> avoid state management thread death >>>> >>>> Since we have passed fully test, we'd better push to merge ussuri >>>> upgrade/openstack-helm rebasing patches soon. >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>>> >>>> Thanks! >>>> Zhipeng >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月5日 22:32 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> This looks promising.  Your theory is that the 2 >>>> openstack-helm-infra patches will fix the mariadb recovery issues. >>>> These 2 patches were merged in the openstack-helm-infra project in >>>> January and February of 2020.   What would be good to know is what >>>> broke mariadb recovery between April of 2019 when Chris Friesen >>>> finished up his story [1] and our current loads today.  
The most >>>> likely explanation is the upversion of Train or the upversion to >>>> openstack-helm-infra done in November 2019 introduced the mariadb >>>> recovery issues.  And then the openstack-helm folks found and fixed >>>> the issue earlier in 2020. >>>> >>>> If we had more time the preferred approach would be to merge just >>>> the openstack-helm-infra changes first to prove they address mariadb >>>> recovery and then in a separate commit merge Ussuri.  But since you >>>> have validated that mariadb recovers with your Ussuri branch and >>>> this branch has these openstack-helm commits, I support letting >>>> Ussuri merge into stx.4.0. >>>> >>>> Frank >>>> [1] https://storyboard.openstack.org/#!/story/2004712 >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Friday, June 05, 2020 2:36 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> As for OpenStack not recovering after both controllers are reset [1] >>>> I could not reproduce this issue with my Ussuri upgrade EB. >>>> My test step is: >>>> 1) ssh to standby controller and sudo reboot -f for it. >>>> 2) sudo reboot -f for activated controller All pods can resume after >>>> a while. >>>> >>>> However, I could reproduce this issue with DB 20200516T080009Z. >>>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>>> [2] early last year. >>>> >>>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>>> It includes below 2 patches which fixed this stability issue. >>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>>> avoid state management thread death >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 22:35 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> This is not a new requirement.  Users expect the software to recover >>>> when resets occur. >>>> >>>> As I had mentioned at the PTG yesterday I know personally that this >>>> test passed in stx3.0 before the upversion to train. Someone else >>>> who performs testing can look to determine when this test was done >>>> as part of feature testing after train was delivered as it should >>>> have been tested as part of stx.3.0 as well.  I do not know when >>>> this started to break.  One topic we will discuss at the PTG >>>> tomorrow will be how to improve our test coverage and automation so >>>> this type of issue can be found immediately as new code is being >>>> delivered. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Wednesday, June 03, 2020 10:28 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Frank, >>>> >>>> Have we pass this case before?  Is it a new requirement? >>>> >>>> Thanks! 
>>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 22:12 >>>> To: Miller, Frank ; Liu, ZhipengS >>>> ; starlingx-discuss at lists.starlingx.io; >>>> Church, Robert >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Yong/Zhipeng - the LP for openstack not recovering after both >>>> controllers are reset is >>>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>>> >>>> Ovidiu is investigating and will provide any updates from his >>>> investigation.  Please continue to keep us informed of your >>>> investigation. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: Tuesday, June 02, 2020 10:38 PM >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> We used a build from May 28. >>>> >>>> As for the decoupling issue these are actively being worked. If you >>>> run the system helm-override-show command when the stx-openstack app >>>> is applied you won’t see the CLI command fail.  It only fails when >>>> you try a helm-override-show when the app is in uploaded state.  In >>>> any case this will be fixed shortly. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Tuesday, June 02, 2020 10:04 PM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Thanks for your quick update! >>>> Which build are you using to test this case? >>>> Since decoupling commits introduced several regressions (at least >>>> 2),  not propose to do this kind of stability test with latest build. >>>> BTW, do we have plan to revert them considering this stability risk? >>>> Our Ussuri upgrade patches is waiting for it☹ >>>> >>>> Furthermore, we have not seen this test case that force reboot both >>>> controllers at the same time. Is it a new requirement? If not , have >>>> we pass this case before, which build? >>>> I'd like to help on it with the pass build for comparative analysis. >>>> From my point , mariadb might not work if we reboot both controllers. >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 8:55 >>>> To: Miller, Frank ; Liu, ZhipengS >>>> ; starlingx-discuss at lists.starlingx.io; >>>> Church, Robert >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> An update on our testing and analysis today.  We are able to >>>> reproduce the issue with OpenStack not recovering when we trigger a >>>> reboot of both AIO controllers at the same time. This results in >>>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>>> openstack commands not working indefinitely after the controllers >>>> recover.  We'll create a launchpad tomorrow to track this issue. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: Tuesday, June 02, 2020 12:25 PM >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Thanks Zhipeng for the analysis.  What is challenging here is the >>>> multitude of issues. 
>>>> >>>> In our debug of openstack the past few days we are seeing the app >>>> fail completely.  After investigation this issue is a Day 1 >>>> containerd issue.  This is tracked in LP: >>>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>>> >>>> The issue you are seeing on a swact is a new and very recent issue >>>> tied to the decoupling commits that were merged late last week.  Bob >>>> is investigating and I expect he'll have a fix soon for that. >>>> >>>> But the issues we are most concerned with are when we see mariadb >>>> crashing and not able to recover or with openstack services not >>>> working for longer periods of time.  We're attempting to isolate the >>>> sequence of events that trigger this. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Tuesday, June 02, 2020 11:47 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>>              Unable to unlock controller after swact and lock w/ >>>> openstack applied I also tested with daily build 20200516T080009Z. >>>> However, it could not be reproduced. >>>> We should  fix this regression ASAP! >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月2日 16:48 >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank and all, >>>> >>>> Update for issue 2. >>>> I raised a new LP to track it. >>>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>>> Below is the time statistics. It seems reasonable. No obvious issue >>>> found. >>>> 1) 3~4min for host restart and get ready. >>>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>>> configmap sync is ready) >>>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>>> a little, as it can retry quickly to connect ovs-vsctl: >>>> unix:/var/run/openvswitch/db.sock) >>>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>>> on ovs-db. ) Any comment? >>>> >>>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>>              Unable to unlock controller after swact and lock w/ >>>> openstack applied >>>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>>              system helm-override-show stx-openstack mariadb >>>> openstack crash  It seems related to openstack plugin decouple >>>> related patches. Should be a regression. >>>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>>> you pls help further check it and your patches, thanks! >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月1日 16:20 >>>> To: 'Miller, Frank' ; >>>> 'starlingx-discuss at lists.starlingx.io' >>>> ; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> I also tested the issue 2 with latest daily build on duplex setup. >>>> The conclusion is that the issue is there all the time. >>>> This issue might not be fixed soon, but should not block OpenStack >>>> upgrade, right? >>>> >>>> For 9 OpenStack patches below, I have removed all workflow-1, except >>>> the first patch and add depends-on all them. 
>>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> Your review and comments are welcome! >>>> >>>> As for issue 2, some detail info FYI. >>>> It also needs to wait for around 10 min before all pods are ready >>>> again after reboot for master build. >>>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>>> my OpenStack upgrade engineering build. >>>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>>> openvswitch-db) >>>>       openvswitch-db-8fxkw >>>> Related key logs below. >>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>>> failed to sync secret cache: timed out waiting for the condition >>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>>> sync configmap cache: timed out waiting for the condition >>>>    Warning  FailedMount  105s               kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>>> failed to sync secret cache: timed out waiting for the condition >>>>    Warning  FailedMount  105s               kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>>> sync configmap cache: timed out waiting for the condition >>>>    Warning  Unhealthy    30s                kubelet, controller-1 >>>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>>> database connection failed (Permission denied) >>>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>>> Readiness probe failed: ovs-vsctl: >>>> unix:/var/run/openvswitch/db.sock: database connection failed >>>> (Permission denied) >>>> >>>> Is it the same stability issue as the one reported from your test >>>> team?  I can only see this issue after force rebooting. What is our >>>> expected recovery time? >>>> Your comment is appreciated! >>>> >>>> Thanks! >>>> Zhipeng >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月29日 9:42 >>>> To: 'Miller, Frank' ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Glad to see your quick reply!! >>>> For OpenStack upgrade task, we have finished all test and get >>>> patches ready for more than 2 weeks, but no any review comments and >>>> feedback from your side.  What's the next step? >>>> >>>> For issue # 2,  in community meeting notes,  I saw that you had some >>>> stability issue from WR local test team. But so far, I do not see >>>> any LP for the detail info. You should ask them to do that!  Right? >>>> >>>> According to your concern, I tried to reproduce it with my build >>>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>>> issue [1] was not seen any more, mariadb got ready quickly, no >>>> regression. >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月29日 1:07 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Thanks Zhipeng. >>>> >>>> Good to see progress on IPv6. >>>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>>> there a LP open on this issue?  
Which pods are not ready? What can >>>> you tell us about this 10 minute outage? >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Thursday, May 28, 2020 5:06 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Nicolae already added test case description. Thanks Nicolae! >>>> >>>> I also did below test on AIO-DX virtual setup, exactly according to >>>> your mentioned steps. >>>> No issue found, but just need to wait for around 10 min before all >>>> pods are ready again after reboot. >>>> >>>> For ipv6 issue, I have submitted new patch for it since dynamic >>>> override for database config did not work. >>>>   https://review.opendev.org/#/c/731461/ >>>>   https://review.opendev.org/#/c/731470/ >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月27日 22:43 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> Thanks for the info.  You have provided the # of testcases but not >>>> what those testcase do.  Where can I find a description of what the >>>> OpenStack testcases do? >>>> >>>> For the controller reset testcases I'd like to see the test result >>>> for the following: >>>> Is openstack usable during the following scenarios on AIO-DX and on >>>> Standard configurations: >>>> - Lock/unlock of standby controller >>>> - reset (ie: reboot -f) of the standby controller >>>> - reset (ie: reboot -f) of the active controller >>>> - reapply of stx-openstack after the above scenarios >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Wednesday, May 27, 2020 9:15 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> We have done below tests. >>>> 1) Sanity tests by Nicolae. 
>>>> AIO - Simplex >>>> Setup                                    04 TCs [PASS] Provisioning >>>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>>> Platform                 07 TCs [PASS] >>>> >>>> TOTAL: [ 61 TCs ] >>>> >>>> AIO - Duplex >>>> Setup                                    04 TCs [PASS] Provisioning >>>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>>> Platform                 07 TCs [PASS] >>>> >>>> TOTAL: [ 64 TCs ] >>>> >>>> Standard - Local Storage (2+2) >>>> Setup                                    04 TCs [PASS] Provisioning >>>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>>> Platform                 08 TCs [PASS] >>>> >>>> TOTAL: [ 65 TCs ] >>>> >>>> Standard External - Dedicated Storage (2+2+2) Setup >>>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             52 TCs [PASS] Sanity Platform >>>> 09 TCs [PASS] >>>> >>>> TOTAL: [ 66 TCs ] >>>> >>>> 2) NFV scenario test by me >>>>      on duplex/multi standard virtual setup >>>>            duplex bare metal setup >>>> ===== Setup >>>> ==================================================================== >>>> ============================================================= >>>> 2020-05-14 02:30:05.524  Create flavor small >>>> ........................................ [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>>> .............................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_swap >>>> ................................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>>> ......................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium >>>> ....................................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>>> ............................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>>> .................................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>>> ........................ [OKAY] >>>> 2020-05-14 02:30:05.653  Create image cirros >>>> ........................................ [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros >>>> ....................................... [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>>> ............................. [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>>> .................................. [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>>> ........................ [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume empty_volume >>>> ................................. [OKAY] >>>> 2020-05-14 02:30:05.786  Create network internal >>>> .................................... [OKAY] >>>> 2020-05-14 02:30:06.158  Create network external >>>> .................................... [OKAY] >>>> 2020-05-14 02:30:06.772  Create subnet internal >>>> ..................................... [OKAY] >>>> 2020-05-14 02:30:07.661  Create subnet external >>>> ..................................... [OKAY] >>>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>>> ................................... [OKAY] >>>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>>> ......................... [OKAY] >>>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>>> .............................. [OKAY] >>>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>>> .................... 
[OKAY] >>>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>>> ............................. [OKAY] >>>> 2020-05-14 02:31:21.241  Create instance >>>> cirros-image-with-volumes-1  ................ [OKAY] >>>> ==================================================================== >>>> ==================================================================== >>>> ===== ===== Test Iteration 0 (single-execution) >>>> ==================================================================== >>>> =============================== >>>> 2020-05-14 02:33:04.172  Test Instance-Pause >>>> ........................................ [OKAY]  (2020-05-14 >>>> 02:33:18.078 Δ=0:00:12.870) >>>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:33:41.608 Δ=0:00:05.866) >>>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:33:59.546 Δ=0:00:05.792) >>>> 2020-05-14 02:34:11.103  Test Instance-Resume >>>> ....................................... [OKAY]  (2020-05-14 >>>> 02:34:17.756 Δ=0:00:05.937) >>>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>>> Δ=0:02:15.748) >>>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>>> Δ=0:00:11.704) >>>> 2020-05-14 02:37:30.673  Test Instance-Stop >>>> ......................................... [OKAY]  (2020-05-14 >>>> 02:38:44.543 Δ=0:01:13.220) >>>> 2020-05-14 02:39:00.481  Test Instance-Start >>>> ........................................ [OKAY]  (2020-05-14 >>>> 02:39:07.198 Δ=0:00:06.068) >>>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>>> Δ=0:00:22.306) >>>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>>> Δ=0:01:24.179) >>>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>>> Δ=0:00:05.884) >>>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>>> Δ=0:00:21.637) >>>> 2020-05-14 02:43:52.320  Test Instance-Resize >>>> ....................................... [OKAY]  (2020-05-14 >>>> 02:45:16.409 Δ=0:01:22.812) >>>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>>> Δ=0:00:05.777) >>>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>>> Δ=0:00:21.748) >>>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:48:59.762 Δ=0:01:12.980) >>>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>>> >>>> 3) Another 2 test >>>>      a) Using IPv6 >>>>           It can pass with workaround now.  I need one more fix for it. >>>>           In my previous patch https://review.opendev.org/#/c/716524 >>>> (merged), I dynamically override below >>>>              config_override: | >>>>                  [mysqld] >>>>                  bind_address=:: >>>>           However, it did not work now. 
From log,  it shows error >>>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>>> line: 1'" >>>>           I tried many methods, but could not remove the first line >>>> in 20-override.cnf >>>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>>> 20-override.cnf >>>>                  |- >>>>                  [mysqld] >>>>                  bind_address=:: >>>>          I can only add it in manifest.yaml as a static override >>>> like below. >>>>                 values: >>>>                    conf: >>>>                        database: >>>>                            config_override: | >>>>                                [mysqld] >>>>                                bind_address=:: >>>>                   b) Reset of controllers and check status of >>>> OpenStack while a controller is rebooting. >>>>           I have tested it and pass on simplex. >>>>           For duplex, I have a setup issue in my side. >>>>           @Jascanu, Nicolae  Could you help me on it for duplex >>>> test, if you have time today. Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月26日 21:13 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> Can you publish the list of tests that have been run for openstack? >>>> >>>> Also has openstack been tested for the following scenarios: >>>> 1) Using IPv6 >>>> 2) Reset of controllers and check status of openstack while a >>>> controller is rebooting? >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Monday, May 25, 2020 3:14 AM >>>> To: starlingx-discuss at lists.starlingx.io >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi all, >>>> >>>> We have passed all sanity test on all setup. Thanks Nicolae!! >>>> We also built out OpenStack service images from layered build >>>> environment. >>>> >>>> Please help to review and push below patches to be merged, thanks! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>>> us) >>>> >>>> BRs >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月14日 16:49 >>>> To: 'Saul Wold' ; >>>> 'starlingx-discuss at lists.starlingx.io' >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi all, >>>> >>>> Call for patch review again! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>>> us) >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月9日 8:38 >>>> To: Saul Wold ; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Agree! >>>> >>>> -----Original Message----- >>>> From: Saul Wold >>>> Sent: 2020年5月9日 0:29 >>>> To: starlingx-discuss at lists.starlingx.io >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> I would strengthen that to no changes until we get Green Sanity >>>> other than what's required to make them Green. >>>> >>>> Full Stop! >>>> >>>> Sau! 
>>>> >>>> >>>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>>> Until we can get sanity passing for several days in a row I >>>>> strongly suggest we do not allow any further changes into the load >>>>> related to OpenStack.  Folks can continue with reviews but let’s >>>>> hold off allowing merges related to a new OpenStack version. >>>>> >>>>> Frank >>>>> >>>>> *From:*Liu, ZhipengS >>>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>>> *To:* starlingx-discuss >>>>> *Cc:* YU CHENGDE ; Penney, Don >>>>> >>>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>>> for patch review!! >>>>> >>>>> Hi all, >>>>> >>>>> Please help to review OpenStack Ussuri upgrade patches. >>>>> >>>>> Our target is to get all below patches merged by end of next week. >>>>> >>>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>>> tus >>>>> :merged) >>>>> >>>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>>> to >>>>> python3.6 for OpenStack services as ussuri release only support >>>>> python3. >>>>> >>>>> We also rebased openstack-helm/helm-infra to latest version. >>>>> >>>>> Engineering build test status. >>>>> >>>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>>> PASS. >>>>> >>>>> Thanks! >>>>> >>>>> Zhipeng >>>>> >>>>> >>>>> _______________________________________________ >>>>> Starlingx-discuss mailing list >>>>> Starlingx-discuss at lists.starlingx.io >>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>>> s >>>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > 
_______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Mon Jun 22 18:57:24 2020 From: scott.little at windriver.com (Scott Little) Date: Mon, 22 Jun 2020 14:57:24 -0400 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com> References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com> Message-ID: <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com> In short, ussuri should not be merged. Scott On 2020-06-22 1:32 p.m., Scott Little wrote: > The ussuri build only passed because I created customized build > scripts for that feature branch.  This was to prove that the python2/3 > issues were the only issues with the build.  I was in no way > committing to deliver those customizations into the master branch build. > > 1) We would loose the ability to build on older branches without > significant extra effort. > > 2) It would be very fragile.  It relies on hard code list of packages > that are to be compiled for python3 vs python2.  I'm sure that list > will be changing over time. > > What I would like to see is build-stx-images.sh modified to look for > and consume a config file that tells it how to partition wheels and > images into two separate builds.  Externally, the command remains a > single invocation with no new arguments. The config file could then be > modified to individually shift images from python2 to python3 build > method without having to tinker with cengn build scripts every time > there is a change. > > Scott From ildiko.vancsa at gmail.com Mon Jun 22 19:10:48 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 22 Jun 2020 21:10:48 +0200 Subject: [Starlingx-discuss] Q2 Open Infrastructure Community Meetings - June 25th Message-ID: Hi StarlingX Community, The Q2 OSF Community meetings are only three days away! The project teams are holding two community meetings: one on Thursday, June 25th at 8am PT and another one on the same day at 7pm PT (June 26th at 10am Beijing Time). Here I have attached the calendar invites for the two community meetings. 
Join us:

• Thursday, June 25 at 8am PT (1500 UTC)
    • Moderator
        • Jimmy McArthur, OSF
    • Presenters / Open for Questions
        • Airship: Alex Hughes
        • Kata Containers: Eric Ernst
        • OpenInfra Labs: Michael Daitzman
        • OpenStack: Ghanshyam Mann
        • OpenStack 10th birthday: Sunny Cai
        • StarlingX: Bruce Jones
        • Zuul: Monty Taylor
        • OpenInfra / OpenStack Days: Allison Price
        • Open Infrastructure Summit: Erin Disney
        • PTG: Kendall Waters
        • OpenDev: Ashlee Ferguson
• Thursday, June 25 at 7pm PT (June 26th at 0200 UTC / 10am China Standard Time)
    • Moderator:
        • Sunny Cai, OSF
    • Presenters / Open for Questions
        • Airship: Alex Hughes
        • Kata Containers: Xu Wang
        • OpenInfra Labs: Michael Daitzman
        • OpenStack: Rico Lin
        • OpenStack 10th birthday: Sunny Cai
        • StarlingX: Yong Hu
        • Zuul: Clark Boylan
        • OpenInfra / OpenStack Days: Horace Li
        • Open Infrastructure Summit: Erin Disney
        • PTG: Kendall Waters
        • OpenDev: Ashlee Ferguson

Zoom links for each meeting can be found in the calendar hold or here: https://etherpad.opendev.org/p/OSF_Community_Meeting_Q2

See you there!

Thanks,
Ildikó

-------------- next part --------------
A non-text attachment was scrubbed...
Name: OSF Community Meeting 1 (Password: OSF).ics
Type: text/calendar
Size: 1259 bytes
Desc: not available
URL:
-------------- next part --------------
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OSF Community Meeting 2 (Password: OSF).ics
Type: text/calendar
Size: 1271 bytes
Desc: not available
URL:
-------------- next part --------------

From sgw at linux.intel.com Mon Jun 22 20:02:51 2020
From: sgw at linux.intel.com (Saul Wold)
Date: Mon, 22 Jun 2020 13:02:51 -0700
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
In-Reply-To: <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com>
References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com> <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com>
Message-ID: <9636dfa8-b1b1-9e9b-0e2c-584a3d36e77e@linux.intel.com>

On 6/22/20 11:57 AM, Scott Little wrote:
> In short, ussuri should not be merged.
>
So, you're saying that the 5 or so patches that merged to openstack-armada-app need to be reverted? I don't think the build will succeed with a partial set of Ussuri-related patches merged.

Sau!

> Scott
>
> On 2020-06-22 1:32 p.m., Scott Little wrote:
>> The ussuri build only passed because I created customized build
>> scripts for that feature branch.  This was to prove that the python2/3
>> issues were the only issues with the build.  I was in no way
>> committing to deliver those customizations into the master branch build.
>>
>> 1) We would lose the ability to build on older branches without
>> significant extra effort.
>>
>> 2) It would be very fragile.  It relies on a hard-coded list of packages
>> that are to be compiled for python3 vs python2.  I'm sure that list
>> will be changing over time.
>>
>> What I would like to see is build-stx-images.sh modified to look for
>> and consume a config file that tells it how to partition wheels and
>> images into two separate builds.  Externally, the command remains a
>> single invocation with no new arguments. The config file could then be
>> modified to individually shift images from python2 to python3 build
>> method without having to tinker with cengn build scripts every time
>> there is a change.
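For illustration only, one possible shape for the config file described above. The file name, path, and image names here are assumptions made for the sake of the example, not an existing StarlingX artifact:

  # Hypothetical list consumed by build-stx-images.sh; images not listed
  # here would keep the existing python2 build method.
  cat > "${MY_REPO}/build-tools/build-docker-images/python3_images.cfg" <<'EOF'
  # one image name per line; '#' starts a comment
  stx-nova
  stx-neutron
  stx-cinder
  stx-horizon
  EOF

The script could read this file, run one wheels/images pass for the python3 set and another for the remainder, and the CENGN job would keep issuing the same single invocation with no new arguments, as proposed.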
>> >> Scott > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Mon Jun 22 23:59:25 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 22 Jun 2020 23:59:25 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com> References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com> <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com> Message-ID: Hi Scott and Frank, So far, we have merged 5 patches. They are for openstack-helm/helm-infra rebasing patch set. So, it should be OK not to revert below 5 patches now. It will not block daily build and sanity test. Upgrade openstack-helm-infra Upgrade openstack-helm Update manifest.yaml file for openstack-helm upgrade. Update download list for openstack-helm upgrade Fix render error in cinder during openstack-helm rebase For rest 4 patches. 3 unmerged patches are for ussuri OpenStack upgrade. 1 unmerged patch for ipv6 fix @Scott Little For your script change proposal, I will consider it today and add you to review. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月23日 2:57 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In short, ussuri should not be merged. Scott On 2020-06-22 1:32 p.m., Scott Little wrote: > The ussuri build only passed because I created customized build > scripts for that feature branch. This was to prove that the python2/3 > issues were the only issues with the build. I was in no way > committing to deliver those customizations into the master branch build. > > 1) We would loose the ability to build on older branches without > significant extra effort. > > 2) It would be very fragile. It relies on hard code list of packages > that are to be compiled for python3 vs python2. I'm sure that list > will be changing over time. > > What I would like to see is build-stx-images.sh modified to look for > and consume a config file that tells it how to partition wheels and > images into two separate builds. Externally, the command remains a > single invocation with no new arguments. The config file could then be > modified to individually shift images from python2 to python3 build > method without having to tinker with cengn build scripts every time > there is a change. > > Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin.sun at intel.com Tue Jun 23 06:54:40 2020 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 23 Jun 2020 06:54:40 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/24/2020 Message-ID: Hi All: Please find the agenda for 6/24: Agenda for 6/24 meeting: - stx.4.0 story check https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.4.0&tags=stx.distro.other&project_group_id=86 - Auto version integ/kernel repo: https://review.opendev.org/#/c/733459/ - ceph containerization: - centos8: - bugs https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other - open please add if any other topic you want to discuss. https://etherpad.opendev.org/p/stx-distro-other Thanks. BR Austin Sun. From zhipengs.liu at intel.com Tue Jun 23 10:55:30 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 23 Jun 2020 10:55:30 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com> <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com> Message-ID: Hi Scott, I have submitted one patch according to your proposal, please review and add your comment. https://review.opendev.org/#/c/737456/ The patch has been verified by Chant, thanks! You can cherry-pick it and trigger Cengn build again if no problem. Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月23日 7:59 To: 'Scott Little' ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott and Frank, So far, we have merged 5 patches. They are for openstack-helm/helm-infra rebasing patch set. So, it should be OK not to revert below 5 patches now. It will not block daily build and sanity test. Upgrade openstack-helm-infra Upgrade openstack-helm Update manifest.yaml file for openstack-helm upgrade. Update download list for openstack-helm upgrade Fix render error in cinder during openstack-helm rebase For rest 4 patches. 3 unmerged patches are for ussuri OpenStack upgrade. 1 unmerged patch for ipv6 fix @Scott Little For your script change proposal, I will consider it today and add you to review. Thanks! Zhipeng -----Original Message----- From: Scott Little > Sent: 2020年6月23日 2:57 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In short, ussuri should not be merged. Scott On 2020-06-22 1:32 p.m., Scott Little wrote: > The ussuri build only passed because I created customized build > scripts for that feature branch. This was to prove that the python2/3 > issues were the only issues with the build. I was in no way > committing to deliver those customizations into the master branch build. > > 1) We would loose the ability to build on older branches without > significant extra effort. > > 2) It would be very fragile. It relies on hard code list of packages > that are to be compiled for python3 vs python2. I'm sure that list > will be changing over time. > > What I would like to see is build-stx-images.sh modified to look for > and consume a config file that tells it how to partition wheels and > images into two separate builds. Externally, the command remains a > single invocation with no new arguments. 
The config file could then be > modified to individually shift images from python2 to python3 build > method without having to tinker with cengn build scripts every time > there is a change. > > Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Jun 23 12:25:10 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 23 Jun 2020 12:25:10 +0000 Subject: [Starlingx-discuss] No StarlingX Containerization meeting today Message-ID: I am cancelling today's planned meeting for Containerization. All stories for stx4.0 are complete and the remaining work for stx.4.0 is for the primes to address their gating LPs. If anyone has any topics to discuss please use the mailing list. Frank Containers PL -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Jun 23 13:21:20 2020 From: scott.little at windriver.com (Scott Little) Date: Tue, 23 Jun 2020 09:21:20 -0400 Subject: [Starlingx-discuss] Build error in stx 3.0 In-Reply-To: References: <0dd8105b-db7d-6363-8ac4-17e4aced38fa@windriver.com> Message-ID: <4975f14b-d8f6-af5e-c9fb-870332b8c2f2@windriver.com> I can't reproduce your issue.  The basic sequence ...   repo init   repo sync   download_mirrors.sh ...   check logs/*missing*   generate-cgcs-centos-repo.sh ... works for me. On 2020-06-18 12:38 a.m., N, Poornima Y wrote: > > Hi Scott, > > Yes, MY_REPO_ROOT_DIR is pointing to the right directory. > > Below is the output for echo $MY_REPO_ROOT_DIR with username as pyn: > > /localdisk/designer/pyn/stx > > Thanks, > > Poornima > > *From:*Scott Little > *Sent:* Thursday, June 18, 2020 8:42 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Build error in stx 3.0 > > Is your MY_REPO_ROOT_DIR environment variable pointing to the right > place ? > > generate-cgcs-centos-repo.sh should be creating a repo based on the > content of > $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools/rpms_centos.lst . > > Scott > > On 2020-06-17 8:47 p.m., N, Poornima Y wrote: > > Hi all, > > I’m facing build issue on StarlingX 3.0. 
Below are the steps > > I have sync the code using below repo command: > > *repo init -u https://opendev.org/starlingx/manifest > -m default.xml -b r/stx.3.0* > > ** > > I get Missing targets for below src, when give the command > *generate-cgcs-centos-repo.sh > /import/mirrors/CentOS/stx-r1/CentOS/pike/*: > > *Missing targets:* > > *ntp-4.2.6p5-28.el7.centos.src.rpm* > > *systemd-219-62.el7_6.5.src.rpm* > > ** > > When I  look into stx-tools/centos-mirror-tools/rpms_centos.lst > file, the following src are mentioned: > > *systemd-219-67.el7.src.rpm* > > *ntp-4.2.6p5-29.el7.centos.src.rpm* > > ** > > *Notice that, there is a mismatch in the version!* > > ** > > If I proceed with build-pkgs following is the error I get*:* > > *17:30:55 ============ Build failed =============** > 17:30:55 b5: ERROR: build_dir (417): Invalid srpm path > 'mirror:Source/systemd-219-67.el7.src.rpm', evaluated as > '/localdisk/designer/pyn/stx/cgcs-root/cgcs-centos-repo/Source/systemd-219-67.el7.src.rpm', > found in > '/localdisk/designer/pyn/stx/cgcs-root/stx/integ/base/systemd/centos/srpm_path' > 17:30:55 ERROR: reaper (1304): Failed to build src.rpm from source > at 'b5' > 17:30:55* > > ** > > Can anyone point me as to how to resolve this error?. Am I missing > anything? > > ** > > *Thanks and Regards,* > > *Poornima Y N* > > ** > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Tue Jun 23 15:02:06 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 23 Jun 2020 15:02:06 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200622T230405Z Message-ID: Sanity Test from 2020-June-23 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200622T230405Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200622T230405Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 
Regards, STX Validation Team [cid:image003.png at 01D64988.62C36200] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20512 bytes Desc: image003.png URL: From Bill.Zvonar at windriver.com Tue Jun 23 16:02:26 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 23 Jun 2020 16:02:26 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 24, 2020) Message-ID: Hi all, reminder of the TSC/Community call coming up tomorrow. Please feel free to add items to the agenda [0] for the Community call beforehand. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200624T1400 From austin.sun at intel.com Wed Jun 24 03:03:05 2020 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 24 Jun 2020 03:03:05 +0000 Subject: [Starlingx-discuss] only host can share screen in zoom 342730236 Message-ID: Hi Ildiko : It seems zoom update some policy , zoom meeting 342730236 only allowed host to share the screen. Would you like to help check ? Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From poornima.y.n at intel.com Wed Jun 24 09:22:21 2020 From: poornima.y.n at intel.com (N, Poornima Y) Date: Wed, 24 Jun 2020 09:22:21 +0000 Subject: [Starlingx-discuss] Build error in stx 3.0 In-Reply-To: <4975f14b-d8f6-af5e-c9fb-870332b8c2f2@windriver.com> References: <0dd8105b-db7d-6363-8ac4-17e4aced38fa@windriver.com> <4975f14b-d8f6-af5e-c9fb-870332b8c2f2@windriver.com> Message-ID: Hi Scott, Thanks for the reply. I found out the mistake I had done. The tools project used to create the base container image was of master and I synched the code of stx3.0 inside container. Hence, there were errors as said before. After resolving this, everything is going on good. Best Regards, Poornima From: Scott Little Sent: Tuesday, June 23, 2020 6:51 PM To: N, Poornima Y ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Build error in stx 3.0 I can't reproduce your issue. The basic sequence ... repo init repo sync download_mirrors.sh ... check logs/*missing* generate-cgcs-centos-repo.sh ... works for me. On 2020-06-18 12:38 a.m., N, Poornima Y wrote: Hi Scott, Yes, MY_REPO_ROOT_DIR is pointing to the right directory. Below is the output for echo $MY_REPO_ROOT_DIR with username as pyn: /localdisk/designer/pyn/stx Thanks, Poornima From: Scott Little Sent: Thursday, June 18, 2020 8:42 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Build error in stx 3.0 Is your MY_REPO_ROOT_DIR environment variable pointing to the right place ? generate-cgcs-centos-repo.sh should be creating a repo based on the content of $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools/rpms_centos.lst . Scott On 2020-06-17 8:47 p.m., N, Poornima Y wrote: Hi all, I'm facing build issue on StarlingX 3.0. 
Below are the steps I have sync the code using below repo command: repo init -u https://opendev.org/starlingx/manifest -m default.xml -b r/stx.3.0 I get Missing targets for below src, when give the command generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/: Missing targets: ntp-4.2.6p5-28.el7.centos.src.rpm systemd-219-62.el7_6.5.src.rpm When I look into stx-tools/centos-mirror-tools/rpms_centos.lst file, the following src are mentioned: systemd-219-67.el7.src.rpm ntp-4.2.6p5-29.el7.centos.src.rpm Notice that, there is a mismatch in the version! If I proceed with build-pkgs following is the error I get: 17:30:55 ============ Build failed ============= 17:30:55 b5: ERROR: build_dir (417): Invalid srpm path 'mirror:Source/systemd-219-67.el7.src.rpm', evaluated as '/localdisk/designer/pyn/stx/cgcs-root/cgcs-centos-repo/Source/systemd-219-67.el7.src.rpm', found in '/localdisk/designer/pyn/stx/cgcs-root/stx/integ/base/systemd/centos/srpm_path' 17:30:55 ERROR: reaper (1304): Failed to build src.rpm from source at 'b5' 17:30:55 Can anyone point me as to how to resolve this error?. Am I missing anything? Thanks and Regards, Poornima Y N _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Jun 24 13:07:01 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 24 Jun 2020 15:07:01 +0200 Subject: [Starlingx-discuss] only host can share screen in zoom 342730236 In-Reply-To: References: Message-ID: Hi Austin, Thanks for bringing this up. I updated my Zoom settings, please let me know if you still see the issue. Thanks, Ildikó > On Jun 24, 2020, at 05:03, Sun, Austin wrote: > > Hi Ildiko : > It seems zoom update some policy , zoom meeting 342730236 only allowed host to share the screen. > Would you like to help check ? > > > Thanks. > BR > Austin Sun. > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From austin.sun at intel.com Wed Jun 24 13:38:29 2020 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 24 Jun 2020 13:38:29 +0000 Subject: [Starlingx-discuss] only host can share screen in zoom 342730236 In-Reply-To: References: Message-ID: Hi lldiko: Thanks. it is working now . BR Austin Sun. -----Original Message----- From: Ildiko Vancsa Sent: Wednesday, June 24, 2020 9:07 PM To: Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] only host can share screen in zoom 342730236 Hi Austin, Thanks for bringing this up. I updated my Zoom settings, please let me know if you still see the issue. Thanks, Ildikó > On Jun 24, 2020, at 05:03, Sun, Austin wrote: > > Hi Ildiko : > It seems zoom update some policy , zoom meeting 342730236 only allowed host to share the screen. > Would you like to help check ? > > > Thanks. > BR > Austin Sun. 
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Bill.Zvonar at windriver.com Wed Jun 24 14:30:36 2020
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 24 Jun 2020 14:30:36 +0000
Subject: [Starlingx-discuss] Community (& TSC) Call (June 24, 2020)
In-Reply-To:
References:
Message-ID:

From today's meeting...

* Standing Topics
  * Sanity/Build
    * Ussuri Build
      * pending issue re: Python 2 v 3 .whl files
      * Scott/Zhipeng working through a review to address; once it merges, Zhipeng can unleash the remaining Ussuri commits
      * Scott noted that the test build should be done using the Layered Build; he'll send an email
    * 3.x Build
      * some CVEs were backported; ran into an issue that was seen recently on master re: mock changes
      * Saul raised https://bugs.launchpad.net/starlingx/+bug/1884944 to address
      * AR: request Davlet to port his previous fix to 3.0
  * Gerrit Reviews in Need of Attention
    * https://review.opendev.org/735485 for centos8 feature branch
    * https://review.opendev.org/736075 for centos8 feature branch
* Topics for this Week
  * July 1 is Canada Day - next week's meeting will be lightly/not attended by Canucks or Hosers
  * Saul will run next week
* ARs from Previous Meetings
  * nothing new from last week
* Open Requests for Help
  * http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/008953.html
  * a couple of longer interactions on IRC with born2bake and Admin0 (an operator)
  * AR: Bill will respond on the mailing list, attempt to farm out the Qs to the right groups

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, June 23, 2020 12:02 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Community (& TSC) Call (June 24, 2020)

Hi all, reminder of the TSC/Community call coming up tomorrow. Please feel free to add items to the agenda [0] for the Community call beforehand.

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200624T1400

From build.starlingx at gmail.com Wed Jun 24 14:50:36 2020
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 24 Jun 2020 10:50:36 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_post_installer - Build # 806 - Failure!
Message-ID: <1698811497.1839.1593010236961.JavaMail.javamailuser@localhost> Project: STX_build_post_installer Build #: 806 Status: Failure Timestamp: 20200624T143213Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200624T120013Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200624T120013Z DOCKER_BUILD_ID: jenkins-ussuri-20200624T120013Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200624T120013Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200624T120013Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri From alexandru.dimofte at intel.com Wed Jun 24 17:54:33 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 24 Jun 2020 17:54:33 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200624T023601Z Message-ID: Sanity Test from 2020-June-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200624T023601Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200624T023601Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image002.png at 01D64A69.A5F840E0] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.png Type: image/png Size: 20512 bytes Desc: image002.png URL: From ildiko.vancsa at gmail.com Wed Jun 24 18:22:59 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 24 Jun 2020 20:22:59 +0200 Subject: [Starlingx-discuss] [2020 Summit] Programming Committee Nominations Open Message-ID: <66ED0905-5184-436E-9DDB-79B0221747E0@gmail.com> Hi StarlingX Community, Programming Committee nominations for the 2020 Open Infrastructure Summit are open! Programming Committees for each Track will help build the Summit schedule, and are made up of individuals working in open infrastructure. Responsibilities include: • Help the Summit team put together the best possible content based on your subject matter expertise • Promote the individual Tracks within your networks • Review the submissions and Community voting results in your particular Track • Determine if there are any major content gaps in your Track, and if so, potentially solicit additional speakers directly to submit • Ensure diversity of speakers and companies represented in your Track • Avoid vendor sales pitches, focusing more on real-world user stories and technical, in-the-trenches experiences 2020 Summit Tracks: • 5G, NFV & Edge • AI, Machine Learning & HPC • CI/CD • Container Infrastructure • Getting Started • Hands-on Workshops • Open Development • Private & Hybrid Cloud • Public Cloud • Security If you’re interested in nominating yourself or someone else to be a member of the Summit Programming Committee for a specific Track, please __fill out the nomination form[1]__. Nominations will close on July 10, 2020. Programming Committee selections will occur before we open the Call for Presentations (CFP) to receive presentations so that the Committees can host office hours to consult on submissions, and help promote the event. The CFP will be open July 1 - August 4, 2020. Please email speakersupport at openstack.org with any questions or feedback. Thanks, Ildikó [1] https://openstackfoundation.formstack.com/forms/programmingcommitteenom_summit2020 From sgw at linux.intel.com Wed Jun 24 20:20:13 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 24 Jun 2020 13:20:13 -0700 Subject: [Starlingx-discuss] Multi-OS meeting 6/25 @ 1430 UTC Message-ID: <0b2c6339-a7db-7213-6ba1-827f50221c7d@linux.intel.com> We will be meeting tomorrow at 7:30am PT [0] Call Details [1] Zoom link: https://zoom.us/j/342730236 Dialing in from phone: Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 Meeting ID: 342 730 236 International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes The agenda and notes for each call are kept in Etherpads [2]: Agenda: - Status Update & Next Steps - Troubleshooting as needed - Requests for Help Sau! [0] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200625T1430 [1] https://wiki.openstack.org/wiki/Starlingx/Meetings#Call_details_13 [2] https://etherpad.openstack.org/p/stx-multios From build.starlingx at gmail.com Thu Jun 25 01:03:51 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 24 Jun 2020 21:03:51 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 236 - Failure! 
Message-ID: <740945261.1845.1593047031929.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 236 Status: Failure Timestamp: 20200624T175224Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200624T145038Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200624T145038Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200624T145038Z/logs MASTER_BUILD_NUMBER: 29 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200624T145038Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200624T145038Z DOCKER_BUILD_ID: jenkins-ussuri-20200624T145038Z-builder TIMESTAMP: 20200624T145038Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200624T145038Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200624T145038Z/outputs From build.starlingx at gmail.com Thu Jun 25 01:03:53 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 24 Jun 2020 21:03:53 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 29 - Failure! Message-ID: <622541736.1848.1593047033941.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 29 Status: Failure Timestamp: 20200624T145038Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200624T145038Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From scott.little at windriver.com Thu Jun 25 02:29:46 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 24 Jun 2020 22:29:46 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 29 - Failure! In-Reply-To: <622541736.1848.1593047033941.JavaMail.javamailuser@localhost> References: <622541736.1848.1593047033941.JavaMail.javamailuser@localhost> Message-ID: <2e026a00-389f-7a79-9618-37aa96710e46@windriver.com> Error on my part.  I didn't set up the revised ussuri patch correctly.  Problem fixed.  Starting another run. Scott On 2020-06-24 9:03 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_ussuri > Build #: 29 > Status: Failure > Timestamp: 20200624T145038Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200624T145038Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Thu Jun 25 02:47:11 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 25 Jun 2020 02:47:11 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 29 - Failure! 
In-Reply-To: <2e026a00-389f-7a79-9618-37aa96710e46@windriver.com>
References: <622541736.1848.1593047033941.JavaMail.javamailuser@localhost> <2e026a00-389f-7a79-9618-37aa96710e46@windriver.com>
Message-ID:

Scott,

Just checked the log. Loci did not get the python3 argument set normally. (You already found the cause😊)
The python2 service build passes, so it seems https://review.opendev.org/737456 works.
Please cherry-pick the patches below, at their latest versions, in this order.
https://review.opendev.org/731461
https://review.opendev.org/712862
https://review.opendev.org/#/c/712880/
https://review.opendev.org/#/c/719427/
https://review.opendev.org/737456

Thanks!
Zhipeng

From: Scott Little
Sent: 2020年6月25日 10:30
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 29 - Failure!

Error on my part. I didn't set up the revised ussuri patch correctly. Problem fixed. Starting another run.

Scott

On 2020-06-24 9:03 p.m., build.starlingx at gmail.com wrote:
Project: STX_build_master_ussuri
Build #: 29
Status: Failure
Timestamp: 20200624T145038Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200624T145038Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true
FORCE_BUILD: true

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Volker.Hoesslin at swsn.de Thu Jun 25 09:42:51 2020
From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker)
Date: Thu, 25 Jun 2020 09:42:51 +0000
Subject: [Starlingx-discuss] Where did my RAM go?
Message-ID: <4e78f11deabf49c785d077454bca21f5@swsn.de>

Hi!

I have a real RAM problem on my worker nodes; they are equipped with 256 GB of RAM. Unfortunately I can only use a fraction of it for my VMs, and I don't understand why. Maybe someone can explain to me where all the RAM went?

compute-0:~$ cat /proc/meminfo
MemTotal: 263854428 kB
MemFree: 25030424 kB
MemAvailable: 24584388 kB
Buffers: 35308 kB
Cached: 350236 kB
SwapCached: 0 kB
Active: 27306600 kB
Inactive: 236124 kB
Active(anon): 27168144 kB
Inactive(anon): 8936 kB
Active(file): 138456 kB
Inactive(file): 227188 kB
Unevictable: 5424 kB
Mlocked: 5424 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 132 kB
Writeback: 0 kB
AnonPages: 27162864 kB
Mapped: 99272 kB
Shmem: 15880 kB
Slab: 912816 kB
SReclaimable: 132552 kB
SUnreclaim: 780264 kB
KernelStack: 40256 kB
PageTables: 107948 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 27069612 kB
Committed_AS: 47150896 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 823656 kB
VmallocChunk: 34224520432 kB
HardwareCorrupted: 0 kB
CmaTotal: 16384 kB
CmaFree: 2114464 kB
HugePages_Total: 200
HugePages_Free: 200
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
DirectMap4k: 18644 kB
DirectMap2M: 5122048 kB
DirectMap1G: 263192576 kB

compute-0:~$ free
        total       used       free     shared  buff/cache  available
Mem:    263854428   237596844   24955580   15880   1302004   24512988
Swap:   0           0           0

There are two running VMs, one allocated 32 GB and the other one 512 MB, but the server has only about 24 GB left. What's going on? This is a dual-CPU system with all 16 slots filled with 16 GB DIMMs.
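One likely explanation, going only by the meminfo output above: the node has 200 huge pages of 1 GiB each reserved (HugePages_Total: 200, Hugepagesize: 1048576 kB). Memory reserved as huge pages is taken out of the normal allocator and is reported as "used" by free even while HugePages_Free is still 200, and on StarlingX worker nodes such a pool is typically provisioned for VM and vswitch backing. A quick back-of-the-envelope check using only the values reported above (the exact platform command for inspecting per-host hugepage provisioning may differ by release; system host-memory-list is worth checking if available):

  # rough check against the meminfo values quoted above
  awk '/^HugePages_Total/ {t=$2} /^Hugepagesize/ {s=$2}
       END {printf "huge page pool: %d kB (~%.0f GiB)\n", t*s, t*s/1024/1024}' /proc/meminfo
  # 200 pages x 1048576 kB = 209715200 kB (~200 GiB) reserved
  # 209715200 kB + ~27162864 kB AnonPages ~= 236878064 kB, close to the 237596844 kB shown as "used"

So the RAM has not leaked: roughly 200 GiB is pre-reserved as 1G huge pages, which is why only about 24 GiB remains available for ordinary allocations.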
URL: From build.starlingx at gmail.com Thu Jun 25 09:49:10 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 25 Jun 2020 05:49:10 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 237 - Still Failing! In-Reply-To: <310406194.1843.1593047030290.JavaMail.javamailuser@localhost> References: <310406194.1843.1593047030290.JavaMail.javamailuser@localhost> Message-ID: <1704060024.1857.1593078551164.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 237 Status: Still Failing Timestamp: 20200625T052313Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T023042Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200625T023042Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T023042Z/logs MASTER_BUILD_NUMBER: 30 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200625T023042Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200625T023042Z DOCKER_BUILD_ID: jenkins-ussuri-20200625T023042Z-builder TIMESTAMP: 20200625T023042Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200625T023042Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200625T023042Z/outputs From build.starlingx at gmail.com Thu Jun 25 09:49:12 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 25 Jun 2020 05:49:12 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 30 - Still Failing! In-Reply-To: <1516131645.1846.1593047032424.JavaMail.javamailuser@localhost> References: <1516131645.1846.1593047032424.JavaMail.javamailuser@localhost> Message-ID: <1807421696.1860.1593078553752.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 30 Status: Still Failing Timestamp: 20200625T023042Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T023042Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From zhipengs.liu at intel.com Thu Jun 25 12:05:49 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 25 Jun 2020 12:05:49 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 30 - Still Failing! In-Reply-To: <1807421696.1860.1593078553752.JavaMail.javamailuser@localhost> References: <1516131645.1846.1593047032424.JavaMail.javamailuser@localhost> <1807421696.1860.1593078553752.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, Sorry, it was caused by my patch this time. I have fixed it. https://review.opendev.org/#/c/737456/ Please update it and retrigger build again, git fetch https://review.opendev.org/starlingx/root refs/changes/56/737456/11 && git cherry-pick FETCH_HEAD Thanks! 
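For anyone reproducing the build locally, the single cherry-pick command quoted above generalizes to the full ordered list of reviews posted earlier in the thread. A minimal sketch, assuming a freshly synced workspace; the refs/changes path and patch-set number must be taken from each review's download box in Gerrit (only 737456 patch set 11 is quoted above, the others are placeholders), and each change must be applied in the repo it belongs to:
  # Sketch: apply one Gerrit change on top of a synced tree, then repeat for the
  # remaining reviews in the order listed. Syncing first matters: a cherry-pick
  # onto a stale tree can report a conflict that is easy to miss.
  cd "$MY_REPO_ROOT" && repo sync
  cd cgcs-root   # here assumed to be the checkout of starlingx/root, the project this change belongs to
  git fetch https://review.opendev.org/starlingx/root refs/changes/56/737456/11 && \
    git cherry-pick FETCH_HEAD || echo "conflict -- resolve before triggering a build"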
Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月25日 17:49 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 30 - Still Failing! Project: STX_build_master_ussuri Build #: 30 Status: Still Failing Timestamp: 20200625T023042Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T023042Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From build.starlingx at gmail.com Thu Jun 25 13:04:41 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 25 Jun 2020 09:04:41 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 469 - Failure! Message-ID: <1740668639.1863.1593090282094.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 469 Status: Failure Timestamp: 20200625T112232Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/distro/20200625T111832Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri-distro/20200625T111832Z DOCKER_BUILD_ID: jenkins-ussuri-distro-20200625T111832Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/ussuri-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/distro/20200625T111832Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/distro/20200625T111832Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_ussuri LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri-distro BUILD_ISO: false From scott.little at windriver.com Thu Jun 25 13:08:28 2020 From: scott.little at windriver.com (Scott Little) Date: Thu, 25 Jun 2020 09:08:28 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 30 - Still Failing! In-Reply-To: References: <1516131645.1846.1593047032424.JavaMail.javamailuser@localhost> <1807421696.1860.1593078553752.JavaMail.javamailuser@localhost> Message-ID: I have pulled in your revised patch, and restarted the build. scott On 2020-06-25 8:05 a.m., Liu, ZhipengS wrote: > Hi Scott, > > Sorry, it was caused by my patch this time. I have fixed it. > https://review.opendev.org/#/c/737456/ > Please update it and retrigger build again, > > git fetch https://review.opendev.org/starlingx/root refs/changes/56/737456/11 && git cherry-pick FETCH_HEAD > > Thanks! > Zhipeng > > -----Original Message----- > From: build.starlingx at gmail.com > Sent: 2020年6月25日 17:49 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 30 - Still Failing! 
> > Project: STX_build_master_ussuri > Build #: 30 > Status: Still Failing > Timestamp: 20200625T023042Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T023042Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: true From alexandru.dimofte at intel.com Thu Jun 25 13:15:18 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 25 Jun 2020 13:15:18 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200625T021632Z Message-ID: Sanity Test from 2020-June-25 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200625T021632Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200625T021632Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D64B0B.CFCBC610] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20512 bytes Desc: image003.png URL: From build.starlingx at gmail.com Thu Jun 25 16:14:59 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 25 Jun 2020 12:14:59 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_repo_sync_layered - Build # 514 - Failure! 
Message-ID: <1669487006.1867.1593101700608.JavaMail.javamailuser@localhost> Project: STX_repo_sync_layered Build #: 514 Status: Failure Timestamp: 20200625T161345Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/flock/20200625T161205Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MANIFEST: flock.xml PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/flock/20200625T161205Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/flock/20200625T161205Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri-flock From build.starlingx at gmail.com Thu Jun 25 16:15:02 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 25 Jun 2020 12:15:02 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_ussuri - Build # 1 - Failure! Message-ID: <1998352137.1870.1593101702595.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_ussuri Build #: 1 Status: Failure Timestamp: 20200625T161205Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/flock/20200625T161205Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From scott.little at windriver.com Thu Jun 25 18:00:51 2020 From: scott.little at windriver.com (Scott Little) Date: Thu, 25 Jun 2020 14:00:51 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_ussuri - Build # 1 - Failure! In-Reply-To: <1998352137.1870.1593101702595.JavaMail.javamailuser@localhost> References: <1998352137.1870.1593101702595.JavaMail.javamailuser@localhost> Message-ID: My error During manual prep for the test run, looks like a forgot to repo sync before I applied the git cherry-pick, and didn't notice the cherry-pick had reported a conflict before triggering the job After a repo sync, the cherry-pick went in cleanly. The build has been relaunched. Scott On 2020-06-25 12:15 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_layer_flock_master_ussuri > Build #: 1 > Status: Failure > Timestamp: 20200625T161205Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/flock/20200625T161205Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From maryx.camp at intel.com Thu Jun 25 18:12:24 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 25 Jun 2020 18:12:24 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 2020-06-24 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Our tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 2020-06-24   . All -- reviews merged since last meeting:  5 . All -- bug status -- 10 total, 4 WIP o [ww26] 1 new: update the build guide for STX R3.0  AR Mary reply to Poornima, clarify the issue. 
Should we point to release notes for version info?  o [ww25] 2: Change $Home in build docs and Remove hardcoded "r1" in build scripts & docs o [ww24] 2: Document build-pkg command [WIP] and Document packaging esp. build-srpm.data o [ww23] Fix search function. [WIP] First review enabled search of OpenStack docs: https://review.opendev.org/#/c/733383/  [Cross job to test changes: https://review.opendev.org/#/c/733483/] o [ww23] Add instructions for building stx-openstack application [WIP]  OK to merge? Ping Saul, wait 24 hours, then merge. o [ww20] Networking documentation [not started] o [ww16] Build Avoidance [WIP] https://docs.starlingx.io/developer_resources/build_guide.html#build-avoidance) . Reviews in progress:    o Layered Build guide (Poornima). Email questions about why this doc is needed. AR Mary reply to email thread, propose title change "Layered Build Reference" for Scott's original guide.  OK to merge? Ping Saul, wait 24 hours, then merge. o Chinese document for layered build https://review.opendev.org/#/c/726737/  Yong Fu has made changes. Requested Yi Wang / Austin to give technical +1 before merge. o Rook migration editorial  NOT officially in r4.0 but code will merge quickly afterwards. AR MARY figure out how to handle. Should be tied to the code merge -- Orig review 723291 has link to story https://storyboard.openstack.org/#!/story/2005527 o Modifying layered build commands (add pike / remove pike)  This review is valid for the current situation: https://review.opendev.org/#/c/717424/  . Saul's review is valid for "future" situation -- we think will be merged in next couple of weeks https://review.opendev.org/#/c/693761/  . All -- R4 target content: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.4.0&project_id=1046 o Assignment of R4 doc tasks . These stories have no assigned person:    . https://storyboard.openstack.org/#!/story/2007222 (update dist cloud bare metal install w/ redfish info)  AR Mary check discuss archive, may be covered already? If not ask Tao Liu.  . https://storyboard.openstack.org/#!/story/2006588 (updates for Dist Cloud)  AR Mary ask Dariush (reference the story) Do we need doc updates? . All -- Opens o Search functionality priority 1 -- must be fixed for R4. AR Mary follow up with Ildiko and the OpenStack folks who helped with theme update. . Short discussion about future planning:  [17Jun20] keep here for reference. . 1. After R4 release, we will branch the documentation. This will get docs set up for item 2. . 2. Upstream the Wind River Cloud Platform docs. This activity is still in planning/resource gathering stage.  . Branching the STX Docs - notes from discussion 20May20 - keep here for reference.  o Recommendation from Bart to plan a method for versioning the documentation. The current approach has these issues:   . People are making updates to the docs for previous releases in the master branch, but not in the release branch. So if someone goes to look at the docs in the r/stx.3.0 branch, they will get stale info. . Our docs web site does not allow users to see info for previous releases for some areas. For example, our REST API  Reference (https://docs.starlingx.io/api-ref/index.html) is just showing master (I think). To see the r/stx.3.0 REST APIs, the user would have to go to each repo (e.g. metal, config, nfv) and choose the branch there. That isn't a good way to access these docs. . Now we only build the master version of docs. We want to change that for future. 
We want the web page to allow selecting different versions like the examples above.  o Our plan is to keep updating docs in master like we're doing now. After R4 is released, then we'd create an R4 branch and cut over to the new method.  . Ask Scott/Saul to include docs in branch process when they do it. [We think they're doing this already, because someone created an r/stx.3.0 branch.] . Once we have 4.0 branch, delete all the old release folders and have only one version of the docs that we keep up to date.  . The existing R3 branch is just a throwaway because it's not updated at all. [Delete old branches, unversion the current branch.] . After this is implemented, if master is updated with something that applies to previous releases (like a bug fix), you'd have to make a similar change in the specific branch.  From zhipengs.liu at intel.com Fri Jun 26 00:41:50 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 26 Jun 2020 00:41:50 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com> <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com> Message-ID: Hi all, Now we have passed cengn build for ussuri! http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T130609Z/logs/jenkins-STX_build_docker_flock_images-212.log.html http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T130609Z/logs/jenkins-STX_build_master_ussuri-31.log.html Thanks for the great help from Scott and Chant! Nicolae also reported latest EB passed sanity test this Monday! Thanks! We need core guys to push rest 5 Ussuri patches to be merged now. @'Scott Little' @Friesen, Chris@Church, Robert https://review.opendev.org/731461 https://review.opendev.org/712862 https://review.opendev.org/#/c/712880/ https://review.opendev.org/#/c/719427/ https://review.opendev.org/737456 https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月23日 18:56 To: 'Scott Little' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have submitted one patch according to your proposal, please review and add your comment. https://review.opendev.org/#/c/737456/ The patch has been verified by Chant, thanks! You can cherry-pick it and trigger Cengn build again if no problem. Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月23日 7:59 To: 'Scott Little' >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott and Frank, So far, we have merged 5 patches. They are for openstack-helm/helm-infra rebasing patch set. So, it should be OK not to revert below 5 patches now. It will not block daily build and sanity test. Upgrade openstack-helm-infra Upgrade openstack-helm Update manifest.yaml file for openstack-helm upgrade. Update download list for openstack-helm upgrade Fix render error in cinder during openstack-helm rebase For rest 4 patches. 3 unmerged patches are for ussuri OpenStack upgrade. 1 unmerged patch for ipv6 fix @Scott Little For your script change proposal, I will consider it today and add you to review. Thanks! 
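Circling back to the docs branching discussion in the meeting notes above: once a release branch exists, the "make a similar change in the specific branch" step is an ordinary Gerrit backport. A minimal sketch, with the branch name r/stx.4.0 and the commit SHA purely illustrative:
  # Sketch: port a docs fix that already merged on master to a release branch.
  git fetch origin
  git checkout -b backport-fix origin/r/stx.4.0
  git cherry-pick -x <sha-of-master-fix>   # -x records the original commit in the message
  git review r/stx.4.0                     # submit the backport against that branch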
Zhipeng -----Original Message----- From: Scott Little > Sent: 2020年6月23日 2:57 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In short, ussuri should not be merged. Scott On 2020-06-22 1:32 p.m., Scott Little wrote: > The ussuri build only passed because I created customized build > scripts for that feature branch. This was to prove that the python2/3 > issues were the only issues with the build. I was in no way > committing to deliver those customizations into the master branch build. > > 1) We would loose the ability to build on older branches without > significant extra effort. > > 2) It would be very fragile. It relies on hard code list of packages > that are to be compiled for python3 vs python2. I'm sure that list > will be changing over time. > > What I would like to see is build-stx-images.sh modified to look for > and consume a config file that tells it how to partition wheels and > images into two separate builds. Externally, the command remains a > single invocation with no new arguments. The config file could then be > modified to individually shift images from python2 to python3 build > method without having to tinker with cengn build scripts every time > there is a change. > > Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Fri Jun 26 00:58:25 2020 From: scott.little at windriver.com (Scott Little) Date: Thu, 25 Jun 2020 20:58:25 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 31 Pass! Message-ID: <28a2c80e-cbc7-a4fc-d6e0-87803c871e52@windriver.com> CENGN monolithic and layered builds of the ussuri patch set have passed. From the build point of view, I think we can merge this patch series . Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Jun 26 01:30:59 2020 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 25 Jun 2020 18:30:59 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 31 Pass! In-Reply-To: <28a2c80e-cbc7-a4fc-d6e0-87803c871e52@windriver.com> References: <28a2c80e-cbc7-a4fc-d6e0-87803c871e52@windriver.com> Message-ID: Great job everyone, I know alot of effort when it to that from Scott, Zhipeng and Chant across the community. Thanks to everyone! Sau! On 6/25/20 5:58 PM, Scott Little wrote: > CENGN monolithic > > and layered > > builds of the ussuri patch set have passed. > > From the build point of view, I think we can merge this patch series > . > > Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From allison at openstack.org Thu Jun 25 18:49:50 2020 From: allison at openstack.org (Allison Price) Date: Thu, 25 Jun 2020 13:49:50 -0500 Subject: [Starlingx-discuss] OSF Community Meeting 1 Recording & Slides Message-ID: <9F111A9A-5820-4461-8002-430EFBCB2545@openstack.org> Hi everyone, Thank you for joining today’s OSF Community Meeting. If you missed the meeting, you have a few options on getting updated on what was covered. You can listen to the recording [1] and check out the slides [2] that were presented this morning. 
I have also attached a PDF of the slides if that’s easier to access. There will be a second OSF Community Meeting covering the same material tomorrow, Friday, June 26 at 0200 UTC (today, June 25 at 7pm PT). You can find the lineup of speakers and dial-in information here [3]. Stay tuned for the next all project, quarterly update that will be held in September. Cheers, Allison [1] https://zoom.us/rec/share/vJF_FqPgxGJJQ9bntR7vaqM7N7i_X6a81yQa8vtcxU06amK9pV9imWJnfHRSUcQ6 Password: 7W!T*i74 [2] https://docs.google.com/presentation/d/16V82OIYfthb3fFlVoes9jZGKMgDIJZ55F8fXqd1M1hU/edit?usp=sharing [3] https://etherpad.opendev.org/p/OSF_Community_Meeting_Q2 Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: June 2020 Community Update (1).pdf Type: application/pdf Size: 2634642 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Fri Jun 26 12:12:16 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Fri, 26 Jun 2020 12:12:16 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 31 Pass! In-Reply-To: References: <28a2c80e-cbc7-a4fc-d6e0-87803c871e52@windriver.com> Message-ID: Agreed, good works guys. -----Original Message----- From: Saul Wold Sent: Thursday, June 25, 2020 9:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 31 Pass! Great job everyone, I know alot of effort when it to that from Scott, Zhipeng and Chant across the community. Thanks to everyone! Sau! On 6/25/20 5:58 PM, Scott Little wrote: > CENGN monolithic > ithic/20200625T130609Z/> > and layered > iners/20200625T182404Z/> builds of the ussuri patch set have passed. > > From the build point of view, I think we can merge this patch series > . 
> > Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From alexandru.dimofte at intel.com Fri Jun 26 18:46:23 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 26 Jun 2020 18:46:23 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200626T020412Z Message-ID: Sanity Test from 2020-June-26 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200626T020412Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200626T020412Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D64C03.30C07360] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20512 bytes Desc: image003.png URL: From zhipengs.liu at intel.com Sat Jun 27 00:13:37 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Sat, 27 Jun 2020 00:13:37 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com> <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com> Message-ID: Hi Bob and Chris, Could you help add W+1 for the first patch below, so that we can merge all patches together. 
https://review.opendev.org/731461 https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月26日 8:42 To: 'Scott Little' ; 'starlingx-discuss at lists.starlingx.io' ; 'Friesen, Chris' ; 'Church, Robert' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Now we have passed cengn build for ussuri! http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T130609Z/logs/jenkins-STX_build_docker_flock_images-212.log.html http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T130609Z/logs/jenkins-STX_build_master_ussuri-31.log.html Thanks for the great help from Scott and Chant! Nicolae also reported latest EB passed sanity test this Monday! Thanks! We need core guys to push rest 5 Ussuri patches to be merged now. @'Scott Little' @Friesen, Chris@Church, Robert https://review.opendev.org/731461 https://review.opendev.org/712862 https://review.opendev.org/#/c/712880/ https://review.opendev.org/#/c/719427/ https://review.opendev.org/737456 https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月23日 18:56 To: 'Scott Little' >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have submitted one patch according to your proposal, please review and add your comment. https://review.opendev.org/#/c/737456/ The patch has been verified by Chant, thanks! You can cherry-pick it and trigger Cengn build again if no problem. Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月23日 7:59 To: 'Scott Little' >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott and Frank, So far, we have merged 5 patches. They are for openstack-helm/helm-infra rebasing patch set. So, it should be OK not to revert below 5 patches now. It will not block daily build and sanity test. Upgrade openstack-helm-infra Upgrade openstack-helm Update manifest.yaml file for openstack-helm upgrade. Update download list for openstack-helm upgrade Fix render error in cinder during openstack-helm rebase For rest 4 patches. 3 unmerged patches are for ussuri OpenStack upgrade. 1 unmerged patch for ipv6 fix @Scott Little For your script change proposal, I will consider it today and add you to review. Thanks! Zhipeng -----Original Message----- From: Scott Little > Sent: 2020年6月23日 2:57 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In short, ussuri should not be merged. Scott On 2020-06-22 1:32 p.m., Scott Little wrote: > The ussuri build only passed because I created customized build > scripts for that feature branch. This was to prove that the python2/3 > issues were the only issues with the build. I was in no way > committing to deliver those customizations into the master branch build. > > 1) We would loose the ability to build on older branches without > significant extra effort. > > 2) It would be very fragile. It relies on hard code list of packages > that are to be compiled for python3 vs python2. I'm sure that list > will be changing over time. > > What I would like to see is build-stx-images.sh modified to look for > and consume a config file that tells it how to partition wheels and > images into two separate builds. 
Externally, the command remains a > single invocation with no new arguments. The config file could then be > modified to individually shift images from python2 to python3 build > method without having to tinker with cengn build scripts every time > there is a change. > > Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Sat Jun 27 02:24:46 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Sat, 27 Jun 2020 02:24:46 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - June 25/2020 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases stx.4.0 - Ussuri Status >> MS-3 Status - It feels like we are only hours away vs days. - Currently testing build scripts that should handle images built w/ python2 as well as python3. - Note: Team in Shanghai is on holiday Thursday/Friday - Focus has been on monolithic build - If we run into issues w/ layered build, we will consider turning off layered build for stx.4.0 - Forecast to merge is Monday 29. - Feature Test Update - https://docs.google.com/spreadsheets/d/1C9n4aRQT7xMyTDCT5sfuZGNI9ermAX5BYRypzcCpQ6U/edit#gid=968103774 - Remaining feature test: - FPGA Integration - 6/26 On track. Status: Some issues reported & fixed. Going well. - TSN Support in Kata Container - 6/23 Moved by 1wk; seeing core dumps on application. New Date: 7/2 - Openstack Rebase to Ussuri - 7/3 On track. Status: No big issues to highlight - Regression Test Update - https://docs.google.com/spreadsheets/d/1gA3bnLS7aY2y8dKxm4MuqpWyELq3PVJMYtiHn4IWiAk/edit#gid=1717644237 - Need two weeks after Ussuri merges. New Forecast: 7/10 - 7/13 - Bug Backlog - https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.4.0 - Current Count: 57 - Community members/teams are focusing on bug resolution - Release Date / Backend - Given the delay in declaring the MS-3 milestone, need to look at either compressing the backend of the schedule or pushing the release date out. - Current Release Date: 7/17 - Currently carrying a risk of 2wks if we want to keep the Final Regression duration to the planned 2wks. Otherwise, we would have to accept some risk and reduce the final regression. - Bruce: Would feel more comfortable being aggressive on the date if we can put a plan in place to deliver fixes via the patching or upgrades mechanism - These mechanisms exist, but the community effort to regularly create, verify and release binary patches/increments would be too high. - Discussed being flexible with the date as long as we are close to our initial plans -- in the same month - Decided to monitor progress over the next week and discuss further in the next release meeting. 
- RC1 Branch Creation - New Target: 7/7 From alexandru.dimofte at intel.com Sat Jun 27 19:24:59 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sat, 27 Jun 2020 19:24:59 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200627T013412Z Message-ID: Sanity Test from 2020-June-27 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200627T013412Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200627T013412Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image002.png at 01D64CD1.C8A10540] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 20512 bytes Desc: image002.png URL: From shuicheng.lin at intel.com Sat Jun 27 23:52:15 2020 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Sat, 27 Jun 2020 23:52:15 +0000 Subject: [Starlingx-discuss] Need code review for oidc-auth-armada-app and openstack-armada-app project Message-ID: Hi all, Could you help review below 2 patches for centos 8 feature branch? There is no review comment for more than 1 week. Thanks. https://review.opendev.org/735485 https://review.opendev.org/736075 Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Church at windriver.com Sun Jun 28 05:15:13 2020 From: Robert.Church at windriver.com (Church, Robert) Date: Sun, 28 Jun 2020 05:15:13 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> <32670a41-c1dc-50b9-9297-4bb0af1cb308@windriver.com> <7771f37d-8d01-143c-368d-7cb0f58eba55@windriver.com> Message-ID: <3454B3EA-C358-4B17-B89A-1DD63FF79A38@windriver.com> Done. Code is merged. Thanks, Bob From: "Liu, ZhipengS" Date: Friday, June 26, 2020 at 7:13 PM To: "'starlingx-discuss at lists.starlingx.io'" , "Friesen, Chris" , Robert Church Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Bob and Chris, Could you help add W+1 for the first patch below, so that we can merge all patches together. https://review.opendev.org/731461 https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月26日 8:42 To: 'Scott Little' ; 'starlingx-discuss at lists.starlingx.io' ; 'Friesen, Chris' ; 'Church, Robert' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Now we have passed cengn build for ussuri! http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T130609Z/logs/jenkins-STX_build_docker_flock_images-212.log.html http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200625T130609Z/logs/jenkins-STX_build_master_ussuri-31.log.html Thanks for the great help from Scott and Chant! Nicolae also reported latest EB passed sanity test this Monday! Thanks! We need core guys to push rest 5 Ussuri patches to be merged now. @'Scott Little' @Friesen, Chris@Church, Robert https://review.opendev.org/731461 https://review.opendev.org/712862 https://review.opendev.org/#/c/712880/ https://review.opendev.org/#/c/719427/ https://review.opendev.org/737456 https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月23日 18:56 To: 'Scott Little' >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott, I have submitted one patch according to your proposal, please review and add your comment. https://review.opendev.org/#/c/737456/ The patch has been verified by Chant, thanks! You can cherry-pick it and trigger Cengn build again if no problem. Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月23日 7:59 To: 'Scott Little' >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Scott and Frank, So far, we have merged 5 patches. They are for openstack-helm/helm-infra rebasing patch set. So, it should be OK not to revert below 5 patches now. It will not block daily build and sanity test. Upgrade openstack-helm-infra Upgrade openstack-helm Update manifest.yaml file for openstack-helm upgrade. Update download list for openstack-helm upgrade Fix render error in cinder during openstack-helm rebase For rest 4 patches. 3 unmerged patches are for ussuri OpenStack upgrade. 1 unmerged patch for ipv6 fix @Scott Little For your script change proposal, I will consider it today and add you to review. Thanks! Zhipeng -----Original Message----- From: Scott Little > Sent: 2020年6月23日 2:57 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In short, ussuri should not be merged. Scott On 2020-06-22 1:32 p.m., Scott Little wrote: > The ussuri build only passed because I created customized build > scripts for that feature branch. 
This was to prove that the python2/3 > issues were the only issues with the build. I was in no way > committing to deliver those customizations into the master branch build. > > 1) We would loose the ability to build on older branches without > significant extra effort. > > 2) It would be very fragile. It relies on hard code list of packages > that are to be compiled for python3 vs python2. I'm sure that list > will be changing over time. > > What I would like to see is build-stx-images.sh modified to look for > and consume a config file that tells it how to partition wheels and > images into two separate builds. Externally, the command remains a > single invocation with no new arguments. The config file could then be > modified to individually shift images from python2 to python3 build > method without having to tinker with cengn build scripts every time > there is a change. > > Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sun Jun 28 12:01:27 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 28 Jun 2020 08:01:27 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_repo_sync_layered - Build # 536 - Failure! Message-ID: <55107674.1879.1593345688090.JavaMail.javamailuser@localhost> Project: STX_repo_sync_layered Build #: 536 Status: Failure Timestamp: 20200628T120117Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/compiler/20200628T120017Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MANIFEST: compile.xml PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/compiler/20200628T120017Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/compiler/20200628T120017Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri-compiler From build.starlingx at gmail.com Sun Jun 28 12:01:29 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 28 Jun 2020 08:01:29 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_compiler_master_ussuri - Build # 8 - Failure! 
Message-ID: <167377149.1882.1593345690052.JavaMail.javamailuser@localhost> Project: STX_build_layer_compiler_master_ussuri Build #: 8 Status: Failure Timestamp: 20200628T120017Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/compiler/20200628T120017Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From alexandru.dimofte at intel.com Sun Jun 28 15:58:05 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sun, 28 Jun 2020 15:58:05 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200628T013415Z Message-ID: Sanity Test from 2020-June-28 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200628T013415Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200628T013415Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D64D7E.0B689670] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20507 bytes Desc: image003.png URL: From bnovickovs at weecodelab.com Sun Jun 28 18:09:53 2020 From: bnovickovs at weecodelab.com (bnovickovs at weecodelab.com) Date: Sun, 28 Jun 2020 19:09:53 +0100 Subject: [Starlingx-discuss] How can I expose container workloads in kubernetes which are running on worker nodes? Message-ID: Hi folks, Question is simple as that: I want to run some workloads in Kubernetes and expose them via NodePort or LoadBalancer service (via MetalLB). Its pretty easy to do on controller-0/1 nodes since they are connected to OAM network. 
Thus, I am wondering how this can be done on worker nodes which are connected to mgnmt network and data networks (but data networks can be used in conjunction with openstack only). Thank you From build.starlingx at gmail.com Sun Jun 28 23:03:52 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 28 Jun 2020 19:03:52 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_distro_master_master - Build # 168 - Failure! Message-ID: <2027489635.1887.1593385432903.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 168 Status: Failure Timestamp: 20200628T230208Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200628T230208Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From zhipengs.liu at intel.com Mon Jun 29 02:00:47 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 29 Jun 2020 02:00:47 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_compiler_master_ussuri - Build # 8 - Failure! In-Reply-To: <167377149.1882.1593345690052.JavaMail.javamailuser@localhost> References: <167377149.1882.1593345690052.JavaMail.javamailuser@localhost> Message-ID: Scott, It seems caused by patch already merge. You may need update build info setting and retrigger it again. Thanks! Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月28日 20:01 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_layer_compiler_master_ussuri - Build # 8 - Failure! Project: STX_build_layer_compiler_master_ussuri Build #: 8 Status: Failure Timestamp: 20200628T120017Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/compiler/20200628T120017Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From zhipengs.liu at intel.com Mon Jun 29 02:58:23 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 29 Jun 2020 02:58:23 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> Message-ID: Hi Bob and Chris, Could you support me as a core reviewer for openstack-armada-app? I will always focus on this field and work closely with you guys. Thanks! Zhipeng -----Original Message----- From: Hu, Yong Sent: 2020年6月17日 10:24 To: Miller, Frank ; starlingx Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Thank Frank for this +1. We need another +1 from other cores. Regards, Yong On 2020/6/17, 4:22 AM, "Miller, Frank" wrote: I think it makes sense to add 1 prime to existing repos. As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. 
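On the earlier question about exposing workloads that land on worker nodes: a NodePort is opened by kube-proxy on every node in the cluster, not only the node running the pod, so a service backed by pods on a worker can still be reached through the controllers' OAM floating IP. A minimal sketch, assuming a deployment named demo serving on port 8080 already exists (the names and port are illustrative):
  # Sketch: expose a workload scheduled on a worker via NodePort.
  kubectl expose deployment demo --type=NodePort --port=8080
  kubectl get svc demo -o jsonpath='{.spec.ports[0].nodePort}'   # prints e.g. 31234
  # then, from outside the cluster:
  #   curl http://<oam-floating-ip>:<nodePort>/
For the MetalLB/LoadBalancer route, the address pool has to live on a network that clients can actually reach, which on a worker generally means wiring in an additional platform interface for it; that part is worth confirming on the list before relying on it.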
Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks! Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sanjay.k.mukherjee at intel.com Mon Jun 29 07:25:23 2020 From: sanjay.k.mukherjee at intel.com (Mukherjee, Sanjay K) Date: Mon, 29 Jun 2020 07:25:23 +0000 Subject: [Starlingx-discuss] Build Error in download mirror Message-ID: Hi All, Facing error in download_mirror.sh -n -g -c yum.conf.sample ERROR: Failed to download from url: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/latest_build/outputs/RPMS/std/rpm.lst Not able to progress in build .Need help to fix the download mirror error issue. Thanks and Regards, Sanjay Mukherjee -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Mon Jun 29 11:14:23 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 29 Jun 2020 07:14:23 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 242 - Failure! 
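For the download_mirror.sh failure reported above, a quick first check is whether the failing URL is reachable at all from the build environment; a corporate proxy that is not exported inside the build container is a common cause. A minimal sketch using the URL from the error message:
  # Sketch: verify the mirror URL itself before debugging download_mirror.sh.
  curl -fsSIL http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/latest_build/outputs/RPMS/std/rpm.lst | head -n 1
  # If this fails, check proxy settings (http_proxy/https_proxy) and whether the
  # compiler-layer "latest_build" outputs currently exist on the CENGN mirror.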
Message-ID: <1365615555.1891.1593429264381.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 242 Status: Failure Timestamp: 20200629T105920Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20200629T080014Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs MASTER_BUILD_NUMBER: 595 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/monolithic PUBLISH_TIMESTAMP: 20200629T080014Z DOCKER_BUILD_ID: jenkins-master-20200629T080014Z-builder TIMESTAMP: 20200629T080014Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/outputs From build.starlingx at gmail.com Mon Jun 29 11:14:26 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 29 Jun 2020 07:14:26 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 595 - Failure! Message-ID: <1349636653.1894.1593429266724.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 595 Status: Failure Timestamp: 20200629T080014Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From build.starlingx at gmail.com Mon Jun 29 12:01:34 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 29 Jun 2020 08:01:34 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_repo_sync_layered - Build # 539 - Failure! Message-ID: <806808802.1897.1593432095295.JavaMail.javamailuser@localhost> Project: STX_repo_sync_layered Build #: 539 Status: Failure Timestamp: 20200629T120128Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/compiler/20200629T120018Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MANIFEST: compile.xml PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/compiler/20200629T120018Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/compiler/20200629T120018Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri-compiler From build.starlingx at gmail.com Mon Jun 29 12:01:36 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 29 Jun 2020 08:01:36 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_compiler_master_ussuri - Build # 9 - Still Failing! 
In-Reply-To: <1506008388.1880.1593345688558.JavaMail.javamailuser@localhost> References: <1506008388.1880.1593345688558.JavaMail.javamailuser@localhost> Message-ID: <1246049546.1900.1593432097546.JavaMail.javamailuser@localhost> Project: STX_build_layer_compiler_master_ussuri Build #: 9 Status: Still Failing Timestamp: 20200629T120018Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/compiler/20200629T120018Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From Don.Penney at windriver.com Mon Jun 29 13:34:45 2020 From: Don.Penney at windriver.com (Penney, Don) Date: Mon, 29 Jun 2020 13:34:45 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> Message-ID: +1 from me -----Original Message----- From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Sunday, June 28, 2020 10:58 PM To: Hu, Yong; Miller, Frank; starlingx; Friesen, Chris; Church, Robert Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi Bob and Chris, Could you support me as a core reviewer for openstack-armada-app? I will always focus on this field and work closely with you guys. Thanks! Zhipeng -----Original Message----- From: Hu, Yong Sent: 2020年6月17日 10:24 To: Miller, Frank ; starlingx Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Thank Frank for this +1. We need another +1 from other cores. Regards, Yong On 2020/6/17, 4:22 AM, "Miller, Frank" wrote: I think it makes sense to add 1 prime to existing repos. As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks! 
Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Mon Jun 29 14:14:09 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 29 Jun 2020 14:14:09 +0000 Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 242 - Failure! In-Reply-To: <1365615555.1891.1593429264381.JavaMail.javamailuser@localhost> References: <1365615555.1891.1593429264381.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, Since ussuri patch got merged yesterday, this build failed due to not adding two additional repos for base image build. http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs/jenkins-STX_build_docker_base_image-262.log.html Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月29日 19:14 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 242 - Failure! Project: STX_build_docker_images Build #: 242 Status: Failure Timestamp: 20200629T105920Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20200629T080014Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs MASTER_BUILD_NUMBER: 595 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/monolithic PUBLISH_TIMESTAMP: 20200629T080014Z DOCKER_BUILD_ID: jenkins-master-20200629T080014Z-builder TIMESTAMP: 20200629T080014Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/outputs From Robert.Church at windriver.com Mon Jun 29 14:35:12 2020 From: Robert.Church at windriver.com (Church, Robert) Date: Mon, 29 Jun 2020 14:35:12 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> Message-ID: <1474C9B3-5792-4BB0-9D5C-2CDE8AE79D02@windriver.com> +1 from me as well. 
Bob On 6/29/20, 8:34 AM, "Penney, Don" wrote: +1 from me -----Original Message----- From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Sunday, June 28, 2020 10:58 PM To: Hu, Yong; Miller, Frank; starlingx; Friesen, Chris; Church, Robert Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi Bob and Chris, Could you support me as a core reviewer for openstack-armada-app? I will always focus on this field and work closely with you guys. Thanks! Zhipeng -----Original Message----- From: Hu, Yong Sent: 2020年6月17日 10:24 To: Miller, Frank ; starlingx Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Thank Frank for this +1. We need another +1 from other cores. Regards, Yong On 2020/6/17, 4:22 AM, "Miller, Frank" wrote: I think it makes sense to add 1 prime to existing repos. As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks! Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Mon Jun 29 19:25:21 2020 From: Don.Penney at windriver.com (Penney, Don) Date: Mon, 29 Jun 2020 19:25:21 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: <1474C9B3-5792-4BB0-9D5C-2CDE8AE79D02@windriver.com> References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> <1474C9B3-5792-4BB0-9D5C-2CDE8AE79D02@windriver.com> Message-ID: I've added Zhipeng as a core to starlingx/openstack-armada-app. Cheers, Don. 
-----Original Message----- From: Church, Robert Sent: Monday, June 29, 2020 10:35 AM To: Penney, Don; Liu, ZhipengS; Hu, Yong; Miller, Frank; starlingx; Friesen, Chris Subject: Re: Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream +1 from me as well. Bob On 6/29/20, 8:34 AM, "Penney, Don" wrote: +1 from me -----Original Message----- From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Sunday, June 28, 2020 10:58 PM To: Hu, Yong; Miller, Frank; starlingx; Friesen, Chris; Church, Robert Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi Bob and Chris, Could you support me as a core reviewer for openstack-armada-app? I will always focus on this field and work closely with you guys. Thanks! Zhipeng -----Original Message----- From: Hu, Yong Sent: 2020年6月17日 10:24 To: Miller, Frank ; starlingx Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Thank Frank for this +1. We need another +1 from other cores. Regards, Yong On 2020/6/17, 4:22 AM, "Miller, Frank" wrote: I think it makes sense to add 1 prime to existing repos. As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks! Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Mon Jun 29 23:03:44 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 29 Jun 2020 19:03:44 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_distro_master_master - Build # 169 - Still Failing! 
In-Reply-To: <1419669209.1885.1593385431271.JavaMail.javamailuser@localhost> References: <1419669209.1885.1593385431271.JavaMail.javamailuser@localhost> Message-ID: <112084487.1908.1593471825495.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 169 Status: Still Failing Timestamp: 20200629T230200Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200629T230200Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From zhipengs.liu at intel.com Mon Jun 29 23:47:10 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 29 Jun 2020 23:47:10 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream In-Reply-To: References: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> <286C9787-E2BF-4E58-9FA9-B727293E4ACA@intel.com> <1474C9B3-5792-4BB0-9D5C-2CDE8AE79D02@windriver.com> Message-ID: Don and Bob, Thanks! Zhipeng -----Original Message----- From: Penney, Don Sent: 2020年6月30日 3:25 To: Church, Robert ; Liu, ZhipengS ; Hu, Yong ; Miller, Frank ; starlingx ; Friesen, Chris Subject: RE: Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream I've added Zhipeng as a core to starlingx/openstack-armada-app. Cheers, Don. -----Original Message----- From: Church, Robert Sent: Monday, June 29, 2020 10:35 AM To: Penney, Don; Liu, ZhipengS; Hu, Yong; Miller, Frank; starlingx; Friesen, Chris Subject: Re: Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream +1 from me as well. Bob On 6/29/20, 8:34 AM, "Penney, Don" wrote: +1 from me -----Original Message----- From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Sunday, June 28, 2020 10:58 PM To: Hu, Yong; Miller, Frank; starlingx; Friesen, Chris; Church, Robert Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi Bob and Chris, Could you support me as a core reviewer for openstack-armada-app? I will always focus on this field and work closely with you guys. Thanks! Zhipeng -----Original Message----- From: Hu, Yong Sent: 2020年6月17日 10:24 To: Miller, Frank ; starlingx Subject: Re: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Thank Frank for this +1. We need another +1 from other cores. Regards, Yong On 2020/6/17, 4:22 AM, "Miller, Frank" wrote: I think it makes sense to add 1 prime to existing repos. As I see a lot of commits and review comments and emails on the mailing list from Zhipeng I would think it makes the most sense to add Zhipeng as a Core reviewer to the openstack-armada-app and stx-upstream repos. 
Frank -----Original Message----- From: Hu, Yong Sent: Tuesday, June 09, 2020 11:32 PM To: starlingx Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks! Regards, Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Tue Jun 30 01:11:13 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 30 Jun 2020 01:11:13 +0000 Subject: [Starlingx-discuss] Requesting input on future StarlingX Openstack Rebases Message-ID: Hello all, The following question was raised in the StarlingX Release Meeting a couple of weeks ago: How do we make future StarlingX rebases to the latest Openstack release easier and more predictable? I took the action to solicit input from the community in general and the Ussuri rebase team in particular. We've created an Etherpad to collect community input: https://etherpad.opendev.org/p/stx-openstack-rebase Please add your thoughts/ideas to the etherpad in the next couple of weeks for consideration in the stx.5.0 release planning. Thanks, Ghada On behalf of the StarlingX Release Team From yong.hu at intel.com Tue Jun 30 02:08:35 2020 From: yong.hu at intel.com (Hu, Yong) Date: Tue, 30 Jun 2020 02:08:35 +0000 Subject: [Starlingx-discuss] Requesting input on future StarlingX Openstack Rebases In-Reply-To: References: Message-ID: <984EC270-35FF-470F-B6C3-1FB18E2D6E50@intel.com> I put some history data along this product execution on the etherpad. I like to see other feedbacks and take-aways. Regards, Yong On 2020/6/30, 9:13 AM, "Khalil, Ghada" wrote: Hello all, The following question was raised in the StarlingX Release Meeting a couple of weeks ago: How do we make future StarlingX rebases to the latest Openstack release easier and more predictable? I took the action to solicit input from the community in general and the Ussuri rebase team in particular. We've created an Etherpad to collect community input: https://etherpad.opendev.org/p/stx-openstack-rebase Please add your thoughts/ideas to the etherpad in the next couple of weeks for consideration in the stx.5.0 release planning. 
Thanks, Ghada On behalf of the StarlingX Release Team _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yong.hu at intel.com Tue Jun 30 02:41:57 2020 From: yong.hu at intel.com (Hu, Yong) Date: Tue, 30 Jun 2020 02:41:57 +0000 Subject: [Starlingx-discuss] StarlingX Distro-OpenStack: Bi-weekly Project Meeting - WW27.2 Message-ID: <09A0E047-099F-4287-BBD8-7DE92ED26C90@intel.com> Hi guys, Agenda for today's meeting on stx-distro-openstack: 1. Overall STX status and 4.0 release progress - stx.4.0: we are heading to MS-3; stay tuned to the community status meeting or release meeting. 2. OpenStack “U” upgrade Status - BIG THANK YOU to Zhipeng at Intel and Chant at 99Cloud. 3. Testing Status on the latest build with OpenStack "U" - Nic 4. LPs review. Zoom Bridge: https://zoom.us/j/342730236 Project Etherpad: https://etherpad.opendev.org/p/stx-distro-openstack-meetings OPEN LPs on distro.openstack: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.openstack Regards, yong From haochuan.z.chen at intel.com Tue Jun 30 07:47:55 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 30 Jun 2020 07:47:55 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , , Message-ID: Hi Dan Currently backup and restore still breaks, with the same exception as the last-time issue; I worked around it by adding "--overwrite=true" in bootstrap/bringup-essential-services/tasks/bringup_helm.yml line 242. And there is another issue: the armada-api pod could not launch and stays in Pending status. I also worked around it by deleting this pod before this task: TASK [bootstrap/bringup-essential-services : Wait for 120 seconds to ensure kube-system pods are all started]. I propose you check the latest code for B&R.
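Roughly, the two workarounds amount to the following at the kubectl level (a minimal sketch only; the exact ansible task contents in bringup_helm.yml may differ):

# allow the restore playbook to re-apply a label that already exists on the node
kubectl --kubeconfig=/etc/kubernetes/admin.conf label node controller-0 armada=enabled --overwrite=true

# clear the stuck armada-api pod so it can be rescheduled before the wait task runs
kubectl --kubeconfig=/etc/kubernetes/admin.conf -n armada delete pod -l application=armada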
TASK [bootstrap/bringup-essential-services : Fail if any of the Kubernetes component, Networking or Armada pods are not ready by this time] ************************************************* failed: [localhost] (item={'_ansible_parsed': True, 'stderr_lines': [u'error: timed out waiting for the condition on pods/armada-api-6b76cfdbf4-9rm9c'], u'changed': True, u'stderr': u'error: timed out waiting for the condition on pods/armada-api-6b76cfdbf4-9rm9c', u'ansible_job_id': u'567509288348.112224', u'stdout': u'', '_ansible_item_result': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'kubectl --kubeconfig=/etc/kubernetes/admin.conf wait --namespace=armada --for=condition=Ready pods --selector application=armada --timeout=30s', u'removes': None, u'argv': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'attempts': 1, u'delta': u'0:00:30.122867', 'stdout_lines': [], 'failed_when_result': False, '_ansible_no_log': False, u'end': u'2020-06-30 07:26:37.030731', '_ansible_item_label': {'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'application=armada', u'ansible_job_id': u'567509288348.112224', 'item': u'application=armada', u'started': 1, 'changed': True, 'failed': False, u'finished': 0, u'results_file': u'/root/.ansible_async/567509288348.112224', '_ansible_ignore_errors': None, '_ansible_no_log': False}, u'start': u'2020-06-30 07:26:06.907864', u'cmd': [u'kubectl', u'--kubeconfig=/etc/kubernetes/admin.conf', u'wait', u'--namespace=armada', u'--for=condition=Ready', u'pods', u'--selector', u'application=armada', u'--timeout=30s'], u'finished': 1, u'failed': False, 'item': {'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_no_log': False, u'ansible_job_id': u'567509288348.112224', 'item': u'application=armada', u'started': 1, 'changed': True, 'failed': False, u'finished': 0, u'results_file': u'/root/.ansible_async/567509288348.112224', '_ansible_ignore_errors': None, '_ansible_item_label': u'application=armada'}, u'rc': 1, u'msg': u'non-zero return code', '_ansible_ignore_errors': None}) => {"changed": false, "item": {"ansible_job_id": "567509288348.112224", "attempts": 1, "changed": true, "cmd": ["kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "wait", "--namespace=armada", "--for=condition=Ready", "pods", "--selector", "application=armada", "--timeout=30s"], "delta": "0:00:30.122867", "end": "2020-06-30 07:26:37.030731", "failed": false, "failed_when_result": false, "finished": 1, "invocation": {"module_args": {"_raw_params": "kubectl --kubeconfig=/etc/kubernetes/admin.conf wait --namespace=armada --for=condition=Ready pods --selector application=armada --timeout=30s", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"ansible_job_id": "567509288348.112224", "changed": true, "failed": false, "finished": 0, "item": "application=armada", "results_file": "/root/.ansible_async/567509288348.112224", "started": 1}, "msg": "non-zero return code", "rc": 1, "start": "2020-06-30 07:26:06.907864", "stderr": "error: timed out waiting for the condition on pods/armada-api-6b76cfdbf4-9rm9c", "stderr_lines": ["error: timed out waiting for the condition on pods/armada-api-6b76cfdbf4-9rm9c"], "stdout": "", "stdout_lines": []}, "msg": "Pod application=armada is still not ready."} localhost:~$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf --namespace=armada get po Password: NAME READY 
STATUS RESTARTS AGE armada-api-6b76cfdbf4-9rm9c 0/2 Pending 0 17m localhost:~$ BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Friday, June 19, 2020 4:20 AM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello Martin, No, it is the first time seeing it. But I see that logic is introduced by Project: starlingx/ansible-playbooks Commit 514d4e7262f80a73ab37e0132f9e3b30088d14ad CommitDate: Wed Jun 10 13:17:00 2020 -0400 Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Thursday, June 18, 2020 4:18 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io > Subject: RE: issue for backup and restore Hi Dan I check restore for latest code, restore will fail with such log. I used to check code base Jun 5 master branch, no such issue. You know about this? TASK [bootstrap/bringup-essential-services : Create Armada node label] ********************************************************************************************************************** fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubectl", "label", "node", "controller-0", "armada=enabled"], "delta": "0:00:00.102152", "end": "2020-06-18 00:57:32.563552", "msg": "non-zero return code", "rc": 1, "start": "2020-06-18 00:57:32.461400", "stderr": "error: 'armada' already has a value (enabled), and --overwrite is false", "stderr_lines": ["error: 'armada' already has a value (enabled), and --overwrite is false"], "stdout": "", "stdout_lines": []} PLAY RECAP ********************************************************************************************************************************************************************************** localhost : ok=354 changed=156 unreachable=0 failed=1 [sysadmin at controller-0 ~(keystone_admin)]$ BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Thursday, June 11, 2020 10:53 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io Subject: RE: issue for backup and restore Hi voiculeasa I confirm backup and restore works without ceph backend. This issue is caused with my improper provision step. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 9, 2020 5:54 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Tue Jun 30 08:16:41 2020 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 30 Jun 2020 08:16:41 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 7/1/2020 Message-ID: Hi All: please find agenda for 7/1 meeting: Agenda for 7/1 meeting: - stx.4.0 release plan update http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/009058.html - Auto version integ/kernel repo: https://review.opendev.org/#/c/733459/ - ceph containerization: - centos8: - bugs https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other https://bugs.launchpad.net/starlingx/+bug/1884262 - open Thanks. BR Austin Sun -------------- next part -------------- An HTML attachment was scrubbed... URL: From agung at btech.id Tue Jun 30 13:04:06 2020 From: agung at btech.id (Rahmat Agung) Date: Tue, 30 Jun 2020 20:04:06 +0700 Subject: [Starlingx-discuss] Instance can't be accessed when using centrilized router? Message-ID: I deploy stx-openstack on top StarlingX 3.0. When I used distributed router, I can access my instance from floating IP, but when using centralized router, it can't. I try to find logs on neutron-server logs but there is no log (where all log saved?). I use flat network as provider network. Is it a common behavior if it only can use distributed or there is something I should add on my config? -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Tue Jun 30 13:32:06 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 30 Jun 2020 13:32:06 +0000 Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 242 - Failure! 
In-Reply-To: References: <1365615555.1891.1593429264381.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, Not sure if you have already added these 2 additional repos(ceph/rh) for base image build on master. For daily build on master yesterday, it failed to build base image as we not add these 2 repos. http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/ For daily build on master today, it did not trigger docker image build at all! (Triggered automatically every Monday?) http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200630T080011Z/ If we use this daily build to do sanity test, it will not include ussuri images. Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月29日 22:14 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 242 - Failure! Hi Scott, Since ussuri patch got merged yesterday, this build failed due to not adding two additional repos for base image build. http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs/jenkins-STX_build_docker_base_image-262.log.html Zhipeng -----Original Message----- From: build.starlingx at gmail.com Sent: 2020年6月29日 19:14 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 242 - Failure! Project: STX_build_docker_images Build #: 242 Status: Failure Timestamp: 20200629T105920Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20200629T080014Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs MASTER_BUILD_NUMBER: 595 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/monolithic PUBLISH_TIMESTAMP: 20200629T080014Z DOCKER_BUILD_ID: jenkins-master-20200629T080014Z-builder TIMESTAMP: 20200629T080014Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/monolithic/20200629T080014Z/outputs From Matt.Peters at windriver.com Tue Jun 30 13:41:58 2020 From: Matt.Peters at windriver.com (Peters, Matt) Date: Tue, 30 Jun 2020 13:41:58 +0000 Subject: [Starlingx-discuss] How can I expose container workloads in kubernetes which are running on worker nodes? In-Reply-To: References: Message-ID: Containerized services may be exposed to an external network via several methods, with considerations needed for each option. 
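As a quick illustration of the NodePort approach described in option 1) below, a minimal Service manifest might look like the following (the workload name, labels and ports are hypothetical placeholders, not taken from any StarlingX component):

apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: NodePort
  selector:
    app: demo-app          # matches the labels on the workload's pods
  ports:
  - port: 80               # cluster-internal service port
    targetPort: 8080       # container port
    nodePort: 31080        # reachable on the host interface IP described in option 1)

For a Service of this type, the DNAT from host IP + NodePort to Pod IP + Port noted below is handled by kube-proxy.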
1) Add a dedicated platform interface with static IP assignment for NodePort service type - Host interface and routes are configured using the StarlingX system commands (does not need to be associated with a specific network type) - Traffic can be exposed directly via the service NodePort or via an Ingress Controller with NodePort - DNAT is used to translate between Host IP+NodePort to Pod IP+Port 2) Configure the cluster-service network to be externally routable and advertise the network using Calico BGP or static routing - Adjacent routers would learn and distribute the cluster service network - Standard K8s service load balancing is used to distributed traffic amongst pods - DNAT is required to translate between service address and pod address - Additional References: https://docs.projectcalico.org/networking/bgp https://docs.projectcalico.org/networking/advertise-service-ips 3) Configure the cluster-pod network to be externally routable and - Pod IP is directly used as the destination IP address - Can add additional Pod IP pools to select which Pods are externally routable - No abstraction of multiple Pod Endpoints (redundancy handled at the application layer) - No proxy or NAT required to reach destination endpoint - Additional References: https://docs.projectcalico.org/networking/workloads-outside-cluster I hope this additional information helps. Regards, Matt On 2020-06-28, 2:11 PM, "bnovickovs at weecodelab.com" wrote: Hi folks, Question is simple as that: I want to run some workloads in Kubernetes and expose them via NodePort or LoadBalancer service (via MetalLB). Its pretty easy to do on controller-0/1 nodes since they are connected to OAM network. Thus, I am wondering how this can be done on worker nodes which are connected to mgnmt network and data networks (but data networks can be used in conjunction with openstack only). Thank you _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Tue Jun 30 20:58:05 2020 From: scott.little at windriver.com (Scott Little) Date: Tue, 30 Jun 2020 16:58:05 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_compiler_master_ussuri - Build # 8 - Failure! In-Reply-To: References: <167377149.1882.1593345690052.JavaMail.javamailuser@localhost> Message-ID: I have disabled all the ussuri builds. In theory we don't need them.  Do you agree? Scott On 2020-06-28 10:00 p.m., Liu, ZhipengS wrote: > Scott, > It seems caused by patch already merge. > You may need update build info setting and retrigger it again. > > Thanks! > Zhipeng > > -----Original Message----- > From: build.starlingx at gmail.com > Sent: 2020年6月28日 20:01 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [build-report] STX_build_layer_compiler_master_ussuri - Build # 8 - Failure! 
> > Project: STX_build_layer_compiler_master_ussuri > Build #: 8 > Status: Failure > Timestamp: 20200628T120017Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/compiler/20200628T120017Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false From build.starlingx at gmail.com Tue Jun 30 21:15:02 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 30 Jun 2020 17:15:02 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 244 - Failure! Message-ID: <188967903.1915.1593551702795.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 244 Status: Failure Timestamp: 20200630T192805Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200630T160013Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200630T160013Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200630T160013Z/logs MASTER_BUILD_NUMBER: 36 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200630T160013Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200630T160013Z DOCKER_BUILD_ID: jenkins-ussuri-20200630T160013Z-builder TIMESTAMP: 20200630T160013Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200630T160013Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200630T160013Z/outputs