From Ian.Jolliffe at windriver.com Mon Jun 1 02:07:32 2020
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Mon, 1 Jun 2020 02:07:32 +0000
Subject: [Starlingx-discuss] [TSC] Minutes 5/27
Message-ID: 

PTG starts tomorrow – all are welcome – Etherpad here: https://etherpad.opendev.org/p/stx-virtual-PTG-June
Please put your name on the etherpad if you plan on joining.

Notes from TSC Meeting:

* Final prep for PTG - Starts June 1st
  o Airship joint session during the Monday time slot; they would prefer some time earlier (ildikov)
    § Airship plans to share their project changes
    § Do this first and then move to the StarlingX PTG agenda.
    § Ildiko confirmed
* License review process - please read prior to the meeting
  o https://governance.openstack.org/tc/reference/licensing.html
    § OpenStack mandates (and ensures via a legal agreement) that all software written is made available under the Apache License, version 2; this is possible because all code is written within the project
    § Some of OpenStack's software can be considered derivative works of its dependencies, so the OpenStack Requirements team reviews the licenses of dependencies as they're added, tracked centrally in a single file: https://opendev.org/openstack/requirements/src/branch/master/global-requirements.txt
    § In addition, Zuul has some software derived from Ansible under GPL v3: https://opendev.org/zuul/zuul#user-content-license
      · As stated on the page above, they make sure that the comments at the tops of individual source code files reflect the corresponding licenses for them
    § Are we covered on the integrated pieces vs. the Flock code? Depending licenses - what else do we need to do? Cover in the PTG - find a time slot so foundation people can help guide us. We just need to document the approach.
* TSC election
  o https://review.opendev.org/730969

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yong.hu at intel.com Mon Jun 1 05:04:33 2020
From: yong.hu at intel.com (Hu, Yong)
Date: Mon, 1 Jun 2020 05:04:33 +0000
Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT
In-Reply-To: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com>
References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com>
Message-ID: <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com>

Hi Ildikó,
We haven't seen the dedicated Zoom bridge for the vPTG yet.
Are you going to use the normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG?

Regards,
Yong

On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote:

Hi StarlingX Community,

As you may already know, the virtual version of the PTG[1] takes place next week (June 1-5).

Zoom is one of the tools we will be using next week, therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time, mine won’t be available for regular calls during the time of the event.

As StarlingX is also participating in the PTG, I’m hoping this will not cause too much of an inconvenience.

Please let me know if you have any questions.
Thanks, Ildikó [1] https://www.openstack.org/ptg/ _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Mon Jun 1 08:20:15 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 1 Jun 2020 08:20:15 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! 
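For reference, a minimal way to watch this kind of recovery from the active controller is sketched below. It is only a sketch using standard kubectl commands; the "openstack" namespace and the openvswitch-db pod name are taken from the logs earlier in this thread, and the 30-second poll interval is arbitrary, so adjust them for your own setup.

  # Show any stx-openstack pod that is not reporting Running or Completed
  kubectl get pods -n openstack --no-headers | grep -vE 'Running|Completed'

  # Inspect the events of a stuck pod, e.g. the openvswitch-db pod from the logs above
  kubectl describe pod -n openstack openvswitch-db-8fxkw | tail -n 20

  # Poll until every pod in the namespace is back to Running/Completed;
  # the number of passes gives a rough measure of the recovery time
  while kubectl get pods -n openstack --no-headers | grep -vE 'Running|Completed'; do
      sleep 30
  done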
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... 
[OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... 
[OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Mon Jun 1 11:49:35 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 1 Jun 2020 13:49:35 +0200 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> Message-ID: <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> Hi Yong, It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ Thanks, Ildikó > On Jun 1, 2020, at 07:04, Hu, Yong wrote: > > Hi Ildikó, > We haven't seen the dedicated zoom bridge sent for the vPTG. > Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? > > Regards, > Yong > > On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: > > Hi StarlingX Community, > > As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). 
> > Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. > > As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. > > Please let me know if you have any questions. > > Thanks, > Ildikó > > [1] https://www.openstack.org/ptg/ > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > From Brent.Rowsell at windriver.com Mon Jun 1 14:13:13 2020 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Mon, 1 Jun 2020 14:13:13 +0000 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> Message-ID: I've registered but have not received a pw. Brent -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Monday, June 1, 2020 7:50 AM To: Hu, Yong Cc: starlingx Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT Hi Yong, It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ Thanks, Ildikó > On Jun 1, 2020, at 07:04, Hu, Yong wrote: > > Hi Ildikó, > We haven't seen the dedicated zoom bridge sent for the vPTG. > Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? > > Regards, > Yong > > On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: > > Hi StarlingX Community, > > As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). > > Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. > > As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. > > Please let me know if you have any questions. 
> > Thanks, > Ildikó > > [1] https://www.openstack.org/ptg/ > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Barton.Wensley at windriver.com Mon Jun 1 14:19:23 2020 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Mon, 1 Jun 2020 14:19:23 +0000 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> Message-ID: They are sending the password to the email you used to register (with eventbrite). For me, the password was in an email titled "24 hours left until the PTG!", which was easy to miss. Bart -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: June 1, 2020 10:13 AM To: Ildiko Vancsa; Hu, Yong Cc: starlingx Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT I've registered but have not received a pw. Brent -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Monday, June 1, 2020 7:50 AM To: Hu, Yong Cc: starlingx Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT Hi Yong, It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ Thanks, Ildikó > On Jun 1, 2020, at 07:04, Hu, Yong wrote: > > Hi Ildikó, > We haven't seen the dedicated zoom bridge sent for the vPTG. > Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? > > Regards, > Yong > > On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: > > Hi StarlingX Community, > > As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). > > Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. > > As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. > > Please let me know if you have any questions. 
> > Thanks, > Ildikó > > [1] https://www.openstack.org/ptg/ > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From kire at kth.se Mon Jun 1 16:16:22 2020 From: kire at kth.se (=?utf-8?B?SmFuLUVyaWsgTcOlbmdz?=) Date: Mon, 1 Jun 2020 16:16:22 +0000 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> Message-ID: <386A1444-4DA3-4000-9FB9-9C38D375BA4A@kth.se> I also didn’t receive a pw, and I can’t find any “24 hours left until the PTG!”-email either. /Jan-Erik (registered with eventbrite using my corporate email jan-erik.mangs at ericsson.com) 1 juni 2020 kl. 16:19 skrev Wensley, Barton >: They are sending the password to the email you used to register (with eventbrite). For me, the password was in an email titled "24 hours left until the PTG!", which was easy to miss. Bart -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: June 1, 2020 10:13 AM To: Ildiko Vancsa; Hu, Yong Cc: starlingx Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT I've registered but have not received a pw. Brent -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Monday, June 1, 2020 7:50 AM To: Hu, Yong > Cc: starlingx > Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT Hi Yong, It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ Thanks, Ildikó On Jun 1, 2020, at 07:04, Hu, Yong > wrote: Hi Ildikó, We haven't seen the dedicated zoom bridge sent for the vPTG. Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? Regards, Yong On 2020/5/26, 9:54 PM, "Ildiko Vancsa" > wrote: Hi StarlingX Community, As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. 
Please let me know if you have any questions. Thanks, Ildikó [1] https://www.openstack.org/ptg/ _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Jun 1 16:25:08 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 1 Jun 2020 18:25:08 +0200 Subject: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT In-Reply-To: <386A1444-4DA3-4000-9FB9-9C38D375BA4A@kth.se> References: <6DD87EF6-B67D-45C9-BBDB-1E0089B951FB@gmail.com> <24805554-6DB8-4BCF-B9AC-83D2C109EF96@intel.com> <0D0B482F-0B3B-4ED0-AAF8-5EF88C4E62B4@gmail.com> <386A1444-4DA3-4000-9FB9-9C38D375BA4A@kth.se> Message-ID: Hi, Sorry, I was running the edge session, so was limited in mails. If someone did not receive mails from Eventbrite with further details and/or still having issues please reach out in mail to the PTG helpdesk: ptg at openstack.org Thanks, Ildikó > On Jun 1, 2020, at 18:16, Jan-Erik Mångs wrote: > > I also didn’t receive a pw, and I can’t find any “24 hours left until the PTG!”-email either. > > /Jan-Erik > (registered with eventbrite using my corporate email jan-erik.mangs at ericsson.com) > > > >> 1 juni 2020 kl. 16:19 skrev Wensley, Barton : >> >> They are sending the password to the email you used to register (with eventbrite). >> >> For me, the password was in an email titled "24 hours left until the PTG!", which was easy to miss. >> >> Bart >> >> -----Original Message----- >> From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] >> Sent: June 1, 2020 10:13 AM >> To: Ildiko Vancsa; Hu, Yong >> Cc: starlingx >> Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT >> >> I've registered but have not received a pw. >> >> Brent >> >> -----Original Message----- >> From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] >> Sent: Monday, June 1, 2020 7:50 AM >> To: Hu, Yong >> Cc: starlingx >> Subject: Re: [Starlingx-discuss] Zoom bridge is not available during the vPTG (June 1-5) - IMPORTANT >> >> Hi Yong, >> >> It won’t be the regular Zoom bridge that we usually use. All registered attendees should receive instructions via email, please keep monitoring that and register if you haven’t done that yet. >> >> The reason for this is to avoid potential “Zoom bombing” as we had bad experience with spammers in the past who tried to hijack a community meeting. >> >> You can find dial-in information on the PTGbot web page, but you will need the password in order to be able to log in to the session which we will distribute in emails to the registered attendees. 
>> >> __Please DO NOT SHARE THE PASSWORD on the mailing list or other public forum.__ >> >> Thanks, >> Ildikó >> >> >>> On Jun 1, 2020, at 07:04, Hu, Yong wrote: >>> >>> Hi Ildikó, >>> We haven't seen the dedicated zoom bridge sent for the vPTG. >>> Are you going to use this normal StarlingX Zoom bridge (Zoom Link: https://zoom.us/j/342730236) for this vPTG? >>> >>> Regards, >>> Yong >>> >>> On 2020/5/26, 9:54 PM, "Ildiko Vancsa" wrote: >>> >>> Hi StarlingX Community, >>> >>> As you may already know the virtual version of the PTG[1] takes place next week (June 1-5). >>> >>> Zoom is one of the tools we will be using next week therefore my account that we run the StarlingX meetings from will also be utilized to run the event. As only one meeting can run from an account at a time mine won’t be available for regular calls during the time of the event. >>> >>> As StarlingX is also participating in the PTG I’m hoping this will not cause too much of an inconvenience. >>> >>> Please let me know if you have any questions. >>> >>> Thanks, >>> Ildikó >>> >>> [1] https://www.openstack.org/ptg/ >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From yong.hu at intel.com Mon Jun 1 14:49:39 2020 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 1 Jun 2020 14:49:39 +0000 Subject: [Starlingx-discuss] proposals for STX.5.0 - to present in this incoming vPTG Message-ID: Hi Folks, We did a bit homework for StarlingX vPTG topics and here are 3 proposals to present during vPTG, please have a quick look and share your feedback with us: 1. Sdo_proposal.pdf: Use Intel SDO to get small nodes on-board in the context of StarlingX. – Presenter: Yi 2. Starlingx AppHub.pdf: create a project to host “Applications” like, EdgeX, K8S dashboard, Intel EB (RNI) etc., so that StarlingX users can get one-stop solution for testing or evaluation. - Presenter: Mingyuan 3. Hummingbird: a solution to get the *small node* joining StarlingX K8S cluster, by working as a kubelet. – Presenter: Mingyuan   Regards, Yong -------------- next part -------------- A non-text attachment was scrubbed... Name: sdo_proposal.pdf Type: application/pdf Size: 356295 bytes Desc: sdo_proposal.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: StarlingX AppHub.pdf Type: application/pdf Size: 141947 bytes Desc: StarlingX AppHub.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Hummingbird - StarlingX small node management.pdf
Type: application/pdf
Size: 444879 bytes
Desc: Hummingbird - StarlingX small node management.pdf
URL: 

From allison at openstack.org Mon Jun 1 19:55:45 2020
From: allison at openstack.org (Allison Price)
Date: Mon, 1 Jun 2020 14:55:45 -0500
Subject: [Starlingx-discuss] StarlingX Press Release Draft
Message-ID: <9D8A18D4-1761-4860-89AA-7B25D2DD113F@openstack.org>

Hi everyone,

I hope you’re having a great week at the PTG! Below is a link to the press release draft for the potential StarlingX confirmation on June 11 with the OSF Board of Directors. If your organization is contributing to StarlingX and would like to provide a quote for the press release or have any feedback, please reach out to me directly.

Thanks,
Allison

https://docs.google.com/document/d/1VhVUNuBJZ6NuEGix_L5PIcOiYV2RM_K4KAIXjDa_W1M/edit?usp=sharing

From nicolae.jascanu at intel.com Mon Jun 1 20:21:32 2020
From: nicolae.jascanu at intel.com (Jascanu, Nicolae)
Date: Mon, 1 Jun 2020 20:21:32 +0000
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200530T013359Z
Message-ID: 

Sanity Test from 2020-May-30 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200530T013359Z/outputs/iso/)

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200530T013359Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]

===========================================
Sanity Test on Virtual Environment was NOT executed because the setup was used for debugging and regression testing

Regards,
STX Validation Team

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ildiko.vancsa at gmail.com Mon Jun 1 21:07:04 2020
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Mon, 1 Jun 2020 23:07:04 +0200
Subject: [Starlingx-discuss] StarlingX PTG session starts in less than an hour
Message-ID: <546392F7-8881-4F2B-9C52-43D53E7AE421@gmail.com>

Hi,

It is a friendly reminder that the StarlingX session at the virtual PTG event starts in less than an hour. If you have already registered for the event you should have received an email with details about how to join.

If you are interested in attending but haven’t registered yet please do so here: https://virtualptgjune2020.eventbrite.com
Once you have registered you will receive all the necessary information about participating in the event.

For the agenda please see the following etherpad: https://etherpad.opendev.org/p/stx-virtual-PTG-June

I would also like to remind you of the joint session with the Kata Containers community tomorrow at 1400 UTC.

See you in a bit!
Thanks, Ildikó From tyler.smith at windriver.com Mon Jun 1 21:40:23 2020 From: tyler.smith at windriver.com (Smith, Tyler) Date: Mon, 1 Jun 2020 21:40:23 +0000 Subject: [Starlingx-discuss] Fault Containerization: Enable FM panels in Openstack Dashboard In-Reply-To: <08A07A3B6772DE42BB77D7AE70889B8A968E95E3@BGSMSX103.gar.corp.intel.com> References: <08A07A3B6772DE42BB77D7AE70889B8A8F09359C@BGSMSX101.gar.corp.intel.com> <08A07A3B6772DE42BB77D7AE70889B8A968E95E3@BGSMSX103.gar.corp.intel.com> Message-ID: Responses inline Thanks, Tyler From: Das, Ambarish [mailto:ambarish.das at intel.com] Sent: Friday, May 29, 2020 7:39 AM To: Smith, Tyler ; Penney, Don ; Mukherjee, Sanjay K Cc: Wold, Saul ; Jones, Bruce E ; Bhat, Gopalkrishna ; starlingx-discuss at lists.starlingx.io; Sun, Austin ; Eslimi, Dariush Subject: RE: Fault Containerization: Enable FM panels in Openstack Dashboard Hi Tyler, Thanks for explaining the details and we have few queries inline Thanks & regards, Ambarish/Sanjay From: Smith, Tyler > Sent: Friday, May 15, 2020 1:35 AM To: Das, Ambarish >; Penney, Don >; Mukherjee, Sanjay K > Cc: Wold, Saul >; Jones, Bruce E >; Bhat, Gopalkrishna >; starlingx-discuss at lists.starlingx.io; Sun, Austin >; Eslimi, Dariush > Subject: RE: Fault Containerization: Enable FM panels in Openstack Dashboard Hi Ambarish & Sanjay There were two approaches that were being looked at. The first was to use the same GUI plugin for both the platform horizon and containerized horizon, but only copy over the horizon 'enabled' files corresponding to the panels that we want to enable (fault panels in the containerized case). This is the approach that was tried but it ended up not working and required lots of hacks during the docker image build step, such as modifying the code, which we really want to avoid. The reasons it wasn't working weren't really clear to me, I didn't spend time debugging etc. Attached is some background on what was being discussed then. [AD/SM]: We are clear with this approach and I believe the abandoned patch has the required hack for this implementation (https://review.opendev.org/#/c/661423/4). We are able to reproduce this step with docker image build for stx-horizon and FM Panel is visible in openstack dashboard. Please let us know if anything wrong in this understanding/reproduction steps. [TS] Yes, the abandoned patch was working, but need to find a way to do it without those kinds of hacks The decision was made to instead split our plugin into two, one for the platform panels, and one for just the fault panels. This will involve creating a new package next to starlingx-dashboard (in the same repo though) that has a similar structure but only has the relevant fault components. Including: Api/fm.py Api/rest/fm.py Dashboards/admin/active_alarms/ Static/dashboard/fault_management/ Enabled/ -> need the fm related enabled files in here, along with the banner view header section definition (see ADD_HEADER_SECTIONS). These files will get copied over in the docker image build step. The only other instruction in this step should be the csrftoken customization command from the attached email, which I think unfortunately is required. [AD/SM]: As per our understanding all these changes will be part of stx-gui module. Need more information regarding stx-gui component to understand better. Please let us know if any documentation link there to refer for this module ( It would be really helpful if we can approach a POC/module expert for this). 
Also was there any patch created with these changes earlier? [TS] Yes, the changes will be to stx-gui, there's no specific documentation on that module, but as it is a horizon plugin it will roughly follow the structure and features in the openstack plugin documentation mentioned below. If you have specific questions feel free to ask me. There has been no prior attempt at this approach As for the settings for the containerized horizon, they are stored in the openstack helm application manifest here: openstack-armada-app/stx-openstack-helm/stx-openstack-helm/manifests/manifest.yaml My understanding is fault management will remain in the platform as well. A distributed cloud deployment will also have to be tested, as the dc_admin dashboard also queries fm. There's decent documentation on the plugin structure upstream: https://docs.openstack.org/horizon/latest/contributor/tutorials/plugin.html Let me know if you need more details Tyler From: Das, Ambarish [mailto:ambarish.das at intel.com] Sent: Wednesday, May 13, 2020 2:22 AM To: Penney, Don >; Smith, Tyler > Cc: Wold, Saul >; Jones, Bruce E >; Bhat, Gopalkrishna >; starlingx-discuss at lists.starlingx.io; Mukherjee, Sanjay K >; Sun, Austin > Subject: Fault Containerization: Enable FM panels in Openstack Dashboard Hello Tyler & Don, We have started looking into the remaining work in Fault Containerization and looked into the earlier abandoned patch implementation (https://review.opendev.org/#/c/661423/). As we have joined the team newly, we would like to understand GUI and Horizon implementation and next steps to move forward regarding this pending activity. We had a initial discussion regarding this with Saul and Austin and based on their inputs, we would like to have a discussion. Please let me know if you need any clarification. Thanks & regards, Ambarish/Sanjay -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Mon Jun 1 23:31:10 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 1 Jun 2020 19:31:10 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 130 - Failure! Message-ID: <1819066125.1573.1591054271498.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 130 Status: Failure Timestamp: 20200601T232418Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200601T232418Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From Frank.Miller at windriver.com Mon Jun 1 23:43:14 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 1 Jun 2020 23:43:14 +0000 Subject: [Starlingx-discuss] Sanity TC list (was RE: [OpenStack Ussuri Upgrade Task] Call for patch review!!) Message-ID: Nicolae: Thanks for sending the TC list for sanity. The reason sanity is not seeing the stx-openstack recovery issues after a controller reboot is that TC is not currently in the sanity suite. In the 02-Host-Management testcases I see lock/unlock TCs but not TCs where each controller is rebooted and checked to make sure all the apps and pods recover after the reboot. I suggest you plan to add in this type of testcase into sanity. 
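A rough sketch of the shell-level steps such a testcase would automate is given below. This is only an outline: the standby host name (controller-1), the 30-second poll interval, and the assumption that the test user can run the system CLI and kubectl on the active controller (and has passwordless sudo on the standby) are illustrative, not requirements.

  # Outline of a controller-reset recovery check, run from the active controller
  source /etc/platform/openrc

  # 1. Force-reboot the standby controller
  ssh controller-1 'sudo reboot -f'
  sleep 60   # give maintenance time to detect the reboot before polling for recovery

  # 2. Wait until the platform reports the standby host as available again
  until system host-show controller-1 | grep -q 'availability.*available'; do
      sleep 30
  done

  # 3. Confirm the stx-openstack application is still applied
  system application-show stx-openstack

  # 4. Confirm all openstack pods recover (nothing left outside Running/Completed)
  while kubectl get pods -n openstack --no-headers | grep -vE 'Running|Completed'; do
      sleep 30
  done

The same checks would then be repeated for a reset of the active controller (after a swact) and after an stx-openstack reapply, matching the scenarios requested earlier in this thread.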
Frank -----Original Message----- From: Jascanu, Nicolae Sent: Wednesday, May 27, 2020 11:33 AM To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Please find below the list of sanity testcases executed: ######################### Sanity-Openstack ######################### ############# 01-Instance-From-Image.robot ########################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros and Centos instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros and Centos instances. Create Networks For Instances [Documentation] Create networks to be used to launch Cirros and Centos ... instances. Launch Instances [Documentation] Launch Cirros and Centos instances. Suspend Resume Instances [Documentation] Suspend and Resume Cirros and Centos instances. Set Error Active Flags Instances [Documentation] Set 'Error' and 'Active' flags to Cirros and Centos ... instances. Pause Unpause Instances [Documentation] Pause and Unpause Cirros and Centos instances. Stop Start Instances [Documentation] Stop and Start Cirros and Centos instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros and Centos instances. Reboot Instances [Documentation] Reboot Cirros and Centos instances. Rebuild Instances [Documentation] Rebuild Cirros and Centos instances. Resize Instances [Documentation] Resize Cirros instance. Create Flavor ${cirros_flavor_ram} ${cirros_flavor_vcpus} ... ${cirros_flavor_disk} ${cirros_flavor_name_2} Set Unset Properties Instances [Documentation] Set Unset properties of Cirros and Centos instances. Evacuate Instances From Hosts [Documentation] Evacuate all Cirros and Centos instances from computes ... or controllers. ############### 02-Instance-From-Volume.robot ###################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks For Instance [Documentation] Create networks to be used to launch Cirros ... instances. Create Volume For Instances [Documentation] Create volumes with or without properties to be used to ... to launch Cirros instances. Launch Instances [Documentation] Launch Cirros instances. Suspend Resume Instance [Documentation] Suspend and Resume Cirros instances. Set Error Active Flags Instance [Documentation] Set 'Error' and 'Active' flags to Cirros ... instance. Pause Unpause Instances [Documentation] Pause and Unpause Cirros instances. Stop Start Instances [Documentation] Stop and Start Cirros instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros instances. Reboot Instances [Documentation] Reboot Cirros instances. Rebuild Instances [Documentation] Rebuild Cirros instances. Resize Instances [Documentation] Resize Cirros instances. Set Unset Properties Instances [Documentation] Set Unset properties of Cirros instances. Evacuate Instances From Hosts [Documentation] Evacuate all Cirros instances from computes ... or controllers. ############### 03-Instance-From-Snapshot.robot ###################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. 
Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks For Instance [Documentation] Create networks to be used to launch Cirros and Centos ... instances. Create Volume For Instances [Documentation] Create volumes with or without properties to be used ... to launch Cirros instances. Create Snapshot For Instance [Documentation] Create snapshots with or without properties to be used ... to launch Cirros instances. Launch Instances [Documentation] Launch Cirros instances from snapshot. Suspend Resume Instances [Documentation] Suspend and Resume Cirros instances. Set Error Active Flags Instances [Documentation] Set 'Error' and 'Active' flags to Cirros instances. Pause Unpause Instances [Documentation] Pause and Unpause Cirros instances. Stop Start Instances [Documentation] Stop and Start Cirros instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros instances. Reboot Instances [Documentation] Reboot Cirros instances. Rebuild Instances [Documentation] Rebuild Cirros instances. Resize Instances [Documentation] Resize Cirros instances. Set Unset Properties Instances [Documentation] Set Unset properties of Cirros instances. Evacuate Instances From Hosts [Documentation] Evacuate all instances from computes or ... controllers. ############### 04-Instance-From-Heat-Template.robot ######################## Create Flavors for Instance [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images for Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks for Instance [Documentation] Create networks to be used to launch Cirros ... instances. Create Instance Trough Stack [Documentation] Create a Cirros instance using a heat template ############### 05-Measurements-For-Metric.robot ################# Create Image For Metrics [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Update Image Name [Documentation] Update image name. Update Image Disk Ram Size [Documentation] Update image disk size and ram size. ########################### Sanity-Platform ########################### ############# 01-OpenStack-Pod-Healthy.robot ######################## OpenStack PODs Healthy [Documentation] Check all OpenStack pods are healthy, in Running or ... Completed state. Reapply STX OpenStack [Documentation] Re apply stx openstack application without any ... modification to helm charts. STX OpenStack Override Update Reset [Documentation] Helm override for OpenStack nova chart and reset. Kube System Services [Documentation] Check pods status and kube-system services are ... displayed. Create Check Delete POD [Documentation] Launch a POD via kubectl. ################ 02-Host-Management.robot ######################## Add Controller Host Simplex [Documentation] Try to add a new controller on a Simplex ... configuration, expect to fail. Swact Controller Host Simplex [Documentation] Try to perform a swact controller on a Simplex ... configuration, expect to fail. Lock Active Controller [Documentation] Try to perform a lock to the Active controller Lock Unlock Standby Controller [Documentation] Perform a lock/unlock to the Standby controller Lock Unlock Compute Host [Documentation] Perform a lock/unlock to the compute node Lock Unlock Storage Host [Documentation] Perform a lock/unlock to the storage node Regards, Nicolae Jascanu, Ph.D. 
TSD Software Engineer Internet Of Things Group Galati, Romania -----Original Message----- From: Miller, Frank Sent: Wednesday, May 27, 2020 17:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... 
[OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
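(A quick way to see exactly what got rendered into the container is to read the override file straight from the pod - just a minimal check, assuming the stx-openstack "openstack" namespace and the mariadb-server-0 pod referenced further down:

kubectl -n openstack exec mariadb-server-0 -- cat /etc/mysql/conf.d/20-override.cnf

The first line of that file is the one the "option without preceding group ... at line: 1" error below points at.)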
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Jun 2 05:37:20 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 2 Jun 2020 01:37:20 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 131 - Still Failing! In-Reply-To: <54924716.1571.1591054269707.JavaMail.javamailuser@localhost> References: <54924716.1571.1591054269707.JavaMail.javamailuser@localhost> Message-ID: <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 131 Status: Still Failing Timestamp: 20200602T053143Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200602T053143Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From nicolae.jascanu at intel.com Tue Jun 2 07:44:45 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Tue, 2 Jun 2020 07:44:45 +0000 Subject: [Starlingx-discuss] No new layered builds Message-ID: Hi, Since Saturday, May 30 there are no new builds. The last report was sent for build: 20200530T013359Z Regards, Nicolae Jascanu, Ph.D. TSD Software Engineer [intel-logo] Internet Of Things Group Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3923 bytes Desc: image001.png URL: From shuicheng.lin at intel.com Tue Jun 2 08:25:57 2020 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 2 Jun 2020 08:25:57 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 131 - Still Failing! In-Reply-To: <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> References: <54924716.1571.1591054269707.JavaMail.javamailuser@localhost> <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, I try to reproduce the mirror issue in my local environment. It seems it is due to lack of repodata of " http://mirror.starlingx.cengn.ca/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/". If I switch to use original ceph repo which contains the repodata folder, I could download rpms successfully. But my local error message is not the same as CENGN's. This debug data is just for you reference. 
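The gap can also be confirmed without yum by asking the mirror for the repomd.xml directly - a simple curl check against the same path that fails below:

curl -sI http://mirror.starlingx.cengn.ca/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml | head -n1

If this returns 404 while the matching URL on download.ceph.com answers 200, the repodata directory was simply never mirrored to CENGN.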
[slin14 at 0ca513348895 yum.repos.d]$ sudo -E yumdownloader -q -c /tmp/stx_mirror_BBoGyH/yum.conf --releasever=7 --exclude='*.i686' --archlist=noarch,x86_64 --url rh-python36-runtime-2.0-1.el7 http://mirror.starlingx.cengn.ca/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article https://access.redhat.com/articles/1320623 If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ failure: repodata/repomd.xml from ceph-ussuri: [Errno 256] No more mirrors to try. http://mirror.starlingx.cengn.ca/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found [slin14 at 0ca513348895 yum.repos.d]$ echo $? 1 Best Regards Shuicheng -----Original Message----- From: build.starlingx at gmail.com Sent: Tuesday, June 2, 2020 1:37 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 131 - Still Failing! Project: STX_build_layer_flock_master_master Build #: 131 Status: Still Failing Timestamp: 20200602T053143Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200602T053143Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Tue Jun 2 08:48:11 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 2 Jun 2020 08:48:11 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. 
https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. 
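(For anyone repeating the reset scenarios, the recovery check itself is easy to script - an illustrative sketch only, not the exact commands used here, and it assumes the "openstack" namespace plus a placeholder pod name:

# after 'sudo reboot -f' on the target controller, loop until no pod is left outside Running/Completed
watch -n 10 'kubectl get pods -n openstack --no-headers | grep -vE "Running|Completed"'
# for any pod that stays stuck, check its recent events
kubectl -n openstack describe pod <stuck-pod> | tail -n 30
)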
No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. 
[OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. 
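(When looking at the IPv6 case it helps to keep the two layers separate: the pod itself gets an IPv6 address from the cluster, visible with

kubectl -n openstack get pod mariadb-server-0 -o wide

while whether mysqld actually listens on that address is what the bind_address override described next is for. Namespace and pod name here are the usual stx-openstack ones, so treat this as a sketch.)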
In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. 
> > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From nicolae.jascanu at intel.com Tue Jun 2 09:29:24 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Tue, 2 Jun 2020 09:29:24 +0000 Subject: [Starlingx-discuss] Sanity TC list (was RE: [OpenStack Ussuri Upgrade Task] Call for patch review!!) In-Reply-To: References: Message-ID: Hi Frank, We will need to allocate some bandwidth to create a sanity test for this LP. Meanwhile we are following with Zhipeng to understand exactly the steps and timings we need to check Regards, Nicolae Jascanu, Ph.D. TSD Software Engineer Internet Of Things Group Galati, Romania -----Original Message----- From: Miller, Frank Sent: Tuesday, June 2, 2020 02:43 To: Jascanu, Nicolae ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: Sanity TC list (was RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!) Nicolae: Thanks for sending the TC list for sanity. The reason sanity is not seeing the stx-openstack recovery issues after a controller reboot is that TC is not currently in the sanity suite. In the 02-Host-Management testcases I see lock/unlock TCs but not TCs where each controller is rebooted and checked to make sure all the apps and pods recover after the reboot. I suggest you plan to add in this type of testcase into sanity. Frank -----Original Message----- From: Jascanu, Nicolae Sent: Wednesday, May 27, 2020 11:33 AM To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Please find below the list of sanity testcases executed: ######################### Sanity-Openstack ######################### ############# 01-Instance-From-Image.robot ########################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros and Centos instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros and Centos instances. Create Networks For Instances [Documentation] Create networks to be used to launch Cirros and Centos ... instances. Launch Instances [Documentation] Launch Cirros and Centos instances. Suspend Resume Instances [Documentation] Suspend and Resume Cirros and Centos instances. Set Error Active Flags Instances [Documentation] Set 'Error' and 'Active' flags to Cirros and Centos ... 
instances. Pause Unpause Instances [Documentation] Pause and Unpause Cirros and Centos instances. Stop Start Instances [Documentation] Stop and Start Cirros and Centos instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros and Centos instances. Reboot Instances [Documentation] Reboot Cirros and Centos instances. Rebuild Instances [Documentation] Rebuild Cirros and Centos instances. Resize Instances [Documentation] Resize Cirros instance. Create Flavor ${cirros_flavor_ram} ${cirros_flavor_vcpus} ... ${cirros_flavor_disk} ${cirros_flavor_name_2} Set Unset Properties Instances [Documentation] Set Unset properties of Cirros and Centos instances. Evacuate Instances From Hosts [Documentation] Evacuate all Cirros and Centos instances from computes ... or controllers. ############### 02-Instance-From-Volume.robot ###################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks For Instance [Documentation] Create networks to be used to launch Cirros ... instances. Create Volume For Instances [Documentation] Create volumes with or without properties to be used to ... to launch Cirros instances. Launch Instances [Documentation] Launch Cirros instances. Suspend Resume Instance [Documentation] Suspend and Resume Cirros instances. Set Error Active Flags Instance [Documentation] Set 'Error' and 'Active' flags to Cirros ... instance. Pause Unpause Instances [Documentation] Pause and Unpause Cirros instances. Stop Start Instances [Documentation] Stop and Start Cirros instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros instances. Reboot Instances [Documentation] Reboot Cirros instances. Rebuild Instances [Documentation] Rebuild Cirros instances. Resize Instances [Documentation] Resize Cirros instances. Set Unset Properties Instances [Documentation] Set Unset properties of Cirros instances. Evacuate Instances From Hosts [Documentation] Evacuate all Cirros instances from computes ... or controllers. ############### 03-Instance-From-Snapshot.robot ###################### Create Flavors For Instances [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images For Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks For Instance [Documentation] Create networks to be used to launch Cirros and Centos ... instances. Create Volume For Instances [Documentation] Create volumes with or without properties to be used ... to launch Cirros instances. Create Snapshot For Instance [Documentation] Create snapshots with or without properties to be used ... to launch Cirros instances. Launch Instances [Documentation] Launch Cirros instances from snapshot. Suspend Resume Instances [Documentation] Suspend and Resume Cirros instances. Set Error Active Flags Instances [Documentation] Set 'Error' and 'Active' flags to Cirros instances. Pause Unpause Instances [Documentation] Pause and Unpause Cirros instances. Stop Start Instances [Documentation] Stop and Start Cirros instances. Lock Unlock Instances [Documentation] Lock and Unlock Cirros instances. Reboot Instances [Documentation] Reboot Cirros instances. Rebuild Instances [Documentation] Rebuild Cirros instances. Resize Instances [Documentation] Resize Cirros instances. 
Set Unset Properties Instances [Documentation] Set Unset properties of Cirros instances. Evacuate Instances From Hosts [Documentation] Evacuate all instances from computes or ... controllers. ############### 04-Instance-From-Heat-Template.robot ######################## Create Flavors for Instance [Documentation] Create flavors with or without properties to be used ... to launch Cirros instances. Create Images for Instances [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Create Networks for Instance [Documentation] Create networks to be used to launch Cirros ... instances. Create Instance Trough Stack [Documentation] Create a Cirros instance using a heat template ############### 05-Measurements-For-Metric.robot ################# Create Image For Metrics [Documentation] Create images with or without properties to be used ... to launch Cirros instances. Update Image Name [Documentation] Update image name. Update Image Disk Ram Size [Documentation] Update image disk size and ram size. ########################### Sanity-Platform ########################### ############# 01-OpenStack-Pod-Healthy.robot ######################## OpenStack PODs Healthy [Documentation] Check all OpenStack pods are healthy, in Running or ... Completed state. Reapply STX OpenStack [Documentation] Re apply stx openstack application without any ... modification to helm charts. STX OpenStack Override Update Reset [Documentation] Helm override for OpenStack nova chart and reset. Kube System Services [Documentation] Check pods status and kube-system services are ... displayed. Create Check Delete POD [Documentation] Launch a POD via kubectl. ################ 02-Host-Management.robot ######################## Add Controller Host Simplex [Documentation] Try to add a new controller on a Simplex ... configuration, expect to fail. Swact Controller Host Simplex [Documentation] Try to perform a swact controller on a Simplex ... configuration, expect to fail. Lock Active Controller [Documentation] Try to perform a lock to the Active controller Lock Unlock Standby Controller [Documentation] Perform a lock/unlock to the Standby controller Lock Unlock Compute Host [Documentation] Perform a lock/unlock to the compute node Lock Unlock Storage Host [Documentation] Perform a lock/unlock to the storage node Regards, Nicolae Jascanu, Ph.D. TSD Software Engineer Internet Of Things Group Galati, Romania -----Original Message----- From: Miller, Frank Sent: Wednesday, May 27, 2020 17:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Dariush.Eslimi at windriver.com Tue Jun 2 13:21:00 2020 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Tue, 2 Jun 2020 13:21:00 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Config/DC/Flock/Upgrade Bi-weekly Meeting Message-ID: Cancelling due to PTG. 
All, This will not be a status meeting, please bring your questions or bring issues that requires discussions that would help you make decisions. Thanks, Dariush Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC (every 2 weeks) Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Meeting notes are at https://etherpad.openstack.org/p/stx-config_DC_flock Subproject wikis: https://wiki.openstack.org/wiki/StarlingX/Config https://wiki.openstack.org/wiki/StarlingX/DistCloud https://wiki.openstack.org/wiki/StarlingX/FlockServices -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2465 bytes Desc: not available URL: From Dan.Voiculeasa at windriver.com Tue Jun 2 13:22:59 2020 From: Dan.Voiculeasa at windriver.com (Voiculeasa, Dan) Date: Tue, 2 Jun 2020 13:22:59 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: Message-ID: Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat ‘/tmp/hieradata/192.188.204.3.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory\ncp: cannot stat ‘>’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/tmp/hieradata/192.188.204.3.yaml’: No such file or directory", "cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory", "cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory", "cp: cannot stat ‘>’: No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. 
See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Tue Jun 2 13:36:57 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 2 Jun 2020 13:36:57 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: Message-ID: Great thanks Voiculeasa. I already setup backup and restore, simplex. One question, for restore, currently only platform restore is enabled, correct? What restore for openstack? Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! 
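(For anyone hitting the same thing, the quickest way to see what puppet actually complained about, before attaching the whole file, is something like:

grep -iE 'error|warn' /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log

The path is the one reported in the ansible output above.)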
Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Jun 2 14:06:21 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 2 Jun 2020 14:06:21 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: Message-ID: Martin: B&R only works for platform at the moment. For openstack there are outstanding commits that have not merged. I suggest that you just focus on getting B&R for the platform to work. Frank From: Chen, Haochuan Z Sent: Tuesday, June 02, 2020 9:37 AM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] issue for backup and restore Great thanks Voiculeasa. I already setup backup and restore, simplex. One question, for restore, currently only platform restore is enabled, correct? What restore for openstack? Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! 
Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Jun 2 15:20:50 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 2 Jun 2020 15:20:50 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Zhipeng, thank you. Based on the data below, this isn't one problem - it's multiple opportunities for performance optimizations across a number of components. Why is the host restart taking 3-4m ? Can we improve that? Etc.... Nothing here should be a gate for checking in the Ussuri code. My only question would be - do we consider the performance issues documented below to be release gating? brucej -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 2, 2020 1:48 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
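For logs like the FailedMount and probe failures quoted earlier in this message, a quick way to capture the matching pod state is sketched below. It assumes kubectl access on the active controller and the default openstack namespace used by stx-openstack; the pod name is the one from the logs above.

# Which pods are still not recovered after the reboot
kubectl -n openstack get pods -o wide --no-headers | grep -vE 'Running|Completed'

# Mount and probe events for the pods that stay stuck
kubectl -n openstack describe pod openvswitch-db-8fxkw | tail -n 40
kubectl -n openstack get events --sort-by=.metadata.creationTimestamp | grep -iE 'openvswitch|neutron-ovs' | tail -n 30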
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Tue Jun 2 15:47:00 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 2 Jun 2020 15:47:00 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. 
https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. 
No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. 
[OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. 
In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. 
> > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Tue Jun 2 16:25:28 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 2 Jun 2020 16:25:28 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. 
(then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? 
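To reproduce the recovery-time breakdown given earlier in this message on another setup, a rough polling loop is usually enough. This is only a sketch: it assumes kubectl access and the openstack namespace, and it treats Running/Completed as recovered, so pods that run but keep failing probes still need a look at the READY column.

# Timestamped count of not-yet-recovered openstack pods, every 30 seconds
while true; do
  not_ready=$(kubectl -n openstack get pods --no-headers | grep -cvE 'Running|Completed')
  echo "$(date '+%H:%M:%S')  not-ready pods: ${not_ready}"
  [ "${not_ready}" -eq 0 ] && break
  sleep 30
done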
For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
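For readers following the IPv6 workaround above: written out with normal YAML indentation, the static mariadb override amounts to the fragment below. This is only a sketch; the file name is arbitrary, and whether the values are baked into the application manifest or applied as a user override (and the exact CLI form) may differ by release.

cat > mariadb-ipv6-override.yaml <<'EOF'
conf:
  database:
    config_override: |
      [mysqld]
      bind_address=::
EOF

# Assumed user-override path instead of editing manifest.yaml; verify the command
# syntax on your release before relying on it.
system helm-override-update --values mariadb-ipv6-override.yaml stx-openstack mariadb openstack
system application-apply stx-openstack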
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! 
> > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yang.liu at windriver.com Tue Jun 2 02:25:28 2020 From: yang.liu at windriver.com (Liu, Yang (YOW)) Date: Tue, 2 Jun 2020 02:25:28 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Test meeting Message-ID: Canceled for this week due to PTG. Weekly meeting on Tuesday 8AM PT / 1500 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5239 bytes Desc: not available URL: From amy at demarco.com Tue Jun 2 18:25:37 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 2 Jun 2020 13:25:37 -0500 Subject: [Starlingx-discuss] [diversity] Hour of Healing Message-ID: The OSF Diversity and Inclusion Working Group recognizes that this is a trying time for our communities and colleagues. We would like to invite you to 'An Hour of Healing' on Thursday (June 4th at 17:30 - 18:30 UTC) where you can talk to others in a safe place. We invite you to use this time to express your feelings, or to just be able to talk to others without being judged. This session will adhere to the OSF Code of Conduct and zero tolerance for harassment policy, which means we will not be judging or condemning others (individuals or groups) inside OR outside of our immediate community. We will come together to heal, in mutually respectful dialogue, keeping in mind that while there are many different individual viewpoints, we all share pain collectively and can heal together. We will be using https://meetpad.opendev.org/PTGDiversityAndInclusion for this gathering. The OSF Diversity and Inclusion WG -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Wed Jun 3 00:54:31 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 00:54:31 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. 
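When that launchpad is created, a minimal capture along these lines usually helps narrow down the crash-loop sequence. A sketch only: it assumes the openstack namespace, mariadb-server-0 is simply the pod name seen earlier in this thread, and you may need -c <container> if a pod runs more than one container.

# Pods stuck in crash loops and their restart counts
kubectl -n openstack get pods | grep -E 'CrashLoopBackOff|Error|Init'

# Last logs from the previous (crashed) mariadb container instance
kubectl -n openstack logs mariadb-server-0 --previous --tail=100

# Recent events, oldest first
kubectl -n openstack get events --sort-by=.metadata.creationTimestamp | tail -n 50

# A 'collect' tarball from each controller is also worth attaching to the LP.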
Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. 
It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. 
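For reference, that recovery window can be measured by simply watching the pods come back after the reboot, e.g. (a minimal sketch, assuming the openstack namespace used by stx-openstack):

# follow pod state changes as the node rejoins
kubectl -n openstack get pods -w
# or poll for anything that is still not Running/Completed
watch -n 10 'kubectl -n openstack get pods --no-headers | grep -vE "Running|Completed"'

The timestamps from 'kubectl -n openstack get events' then show where most of the ~10 minutes is actually spent.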
For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... 
[OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Wed Jun 3 01:18:29 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 3 Jun 2020 01:18:29 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Release meeting Message-ID: Cancelling this week due to the PTG Weekly meeting on Thursday 11AM PT / 1900 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1857 bytes Desc: not available URL: From zhipengs.liu at intel.com Wed Jun 3 01:39:22 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 3 Jun 2020 01:39:22 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: BTW, https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash This crash could not be reproduced with daily build 20200516T080009Z! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 0:25 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. 
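One way to reconstruct that sequence after the fact is to pull the event timeline and restart counts for the namespace with standard kubectl (nothing StarlingX-specific here):

# event timeline for the openstack namespace, oldest first
kubectl -n openstack get events --sort-by=.metadata.creationTimestamp
# which pods have been restarting the most
kubectl -n openstack get pods --sort-by='.status.containerStatuses[0].restartCount'

Correlating those timestamps with the platform logs on the controllers should help narrow down what kicks off the mariadb crash loop.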
Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
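For anyone re-running that check, a rough way to confirm mariadb is actually healthy (not just Running) is something like the following; the label selector and the credential handling are assumptions that depend on the chart defaults, so treat it as a sketch:

# readiness of the mariadb server pods
kubectl -n openstack get pods -l application=mariadb -o wide
# galera cluster state from inside the server pod (credentials per chart configuration)
kubectl -n openstack exec -it mariadb-server-0 -- mysql -e "SHOW STATUS LIKE 'wsrep_cluster%'"

A cluster size matching the number of server pods and a wsrep_cluster_status of Primary is roughly the state the readiness probe is waiting for.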
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Wed Jun 3 02:03:42 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 3 Jun 2020 02:03:42 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. 
But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
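On the earlier point about shortening the probe retry window, that can in principle be tried as a helm override rather than a chart change. The values path for the probe parameters below is an assumption and needs to be checked against the openvswitch chart actually in use, so this is only a sketch:

cat > ovs-probes.yaml <<'EOF'
pod:
  probes:
    ovs_db:
      ovs_db:
        readiness:
          params:
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
EOF
# apply the override to the openvswitch chart of stx-openstack and re-apply the app
system helm-override-update stx-openstack openvswitch openstack --values ovs-probes.yaml
system application-apply stx-openstack

If the override takes effect, the new parameters show up under the readiness probe in 'kubectl -n openstack describe pod' for the openvswitch-db pod.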
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Wed Jun 3 02:30:35 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 2 Jun 2020 22:30:35 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 393 - Failure! Message-ID: <1562809381.1584.1591151436754.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 393 Status: Failure Timestamp: 20200603T022044Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200603T020359Z DOCKER_BUILD_ID: jenkins-master-flock-20200603T020359Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200603T020359Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Wed Jun 3 02:30:42 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 2 Jun 2020 22:30:42 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! 
In-Reply-To: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> Message-ID: <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From Frank.Miller at windriver.com Wed Jun 3 02:38:21 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 02:38:21 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. 
Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Wed Jun 3 07:17:06 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 3 Jun 2020 03:17:06 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 131 - Still Failing! In-Reply-To: <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> References: <54924716.1571.1591054269707.JavaMail.javamailuser@localhost> <874706157.1579.1591076240958.JavaMail.javamailuser@localhost> Message-ID: <23d26a83-ba55-933f-8979-32e5ce8e2b8f@windriver.com> Two issues 1) There was an issue with the cengn mirroring process.  Recent *.repo changes weren't being fully mirrored.  I found the root cause and corrected it.  The recent content additions are now mirrored. 2) It appears that download_mirror.sh successfully fell back to pulling rpms from upstream sources for the monolithic build, but not for the flock build.   I don't fully understand this issue yet.  Having fixed the cengn mirror, the need for the fallback has been removed, so it's harder to reproduce. On 2020-06-02 1:37 a.m., build.starlingx at gmail.com wrote: > Project: STX_build_layer_flock_master_master > Build #: 131 > Status: Still Failing > Timestamp: 20200602T053143Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200602T053143Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Jun 3 07:56:47 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 3 Jun 2020 03:56:47 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Message-ID: This was an interesting one. 
We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer.

Now build-iso prefers locally built packages over downloaded ones, even if the downloaded one is of a higher version. That policy is open for debate, but that is what it does.

The monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso.

The flock layer build downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower-versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm.

The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build.

Open questions:
1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2? If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10, or can we drop librados2 from the set of packages we have patches against?
2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then the layered build needs to apply the local-first policy across layers. Alternatively, perhaps drop the local-first policy, but add an audit tool to detect when a locally built package is being masked in this way.

Scott

On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_layer_flock_master_master > Build #: 132 > Status: Still Failing > Timestamp: 20200603T020359Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Wed Jun 3 08:47:37 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 3 Jun 2020 08:47:37 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Message-ID: Hi Scott, For question #1: the OpenStack Ussuri image we build is python3 only and needs python3-rbd and its dependencies, so we added librados2-13.2.10 and the related packages. The locally built librados2-13.2.2-0.el7.tis.25.x86_64.rpm is for python2. Shouldn’t we let the build choose the local build first? Another option is moving these packages to the container layer, adding rpms_centos.lst in config/centos/flock/? Thanks! 
Zhipeng From: Scott Little Sent: 2020年6月3日 15:57 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! This was an interesting one. We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso preferres locally built packages over downloaded ones, even if the downloaded on is of higher version. Now that policy is open for debate, but that is what it does. Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions. 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10 , or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then layered build needs apply the local first policy accross layers. Alternatively, perhaps drop the local first policy, but add an audit tool to detect when a locally built package is being masked in this way. Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jun 3 13:07:47 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 3 Jun 2020 06:07:47 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Message-ID: <40d7fc31-e1f6-a416-3815-82f90df44c18@linux.intel.com> On 6/3/20 12:56 AM, Scott Little wrote: > This was an interesting one. > Yes, indeed, great investigative work! 
> We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part > of the distro layer for some time. > > A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of > the flock layer. > It looks like that commit actually added both librados2-13.2.10 and 13.2.2! My bad for not catching that. I was not aware that librados2 was being build as part of Ceph, I guess this is something we should be generally aware of. That change also brought in a load of Ceph related packages (ceph-common, libcephfs2, ...), so there might be additional collisions that we don't know about yet! > Now build-iso preferres locally built packages over downloaded ones, > even if the downloaded on is of higher version.  Now that policy is open > for debate, but that is what it does. > > Monolithic build uses the lst files of all layers, but having built > librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects > librados2-13.2.2-0.el7.tis.25.x86_64.rpm over > librados2-13.2.10-0.el7.x86_64.rpm when building the iso. > > Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm > from the distro layer build.  It doesn't build it itself.  The downloads > from the two sources are lumped into a common repo, so it has no reason > to prefer the lower versioned rpm.  It selects > librados2-13.2.10-0.el7.x86_64.rpm. > Good research! This makes sense (I guess initially) > The final piece of the puzzle is the transitive list of requires for > librados2-13.2.10-0.el7.x86_64.rpm.  It has a new dependency that pulls > in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs > userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present.  It's > wasn't included in the recent lst file changes that added > librados2-13.2.10-0.el7.x86_64.rpm. > We do have userspace-rcu in distro, and lttng-ust is only part of the flock. It seems we have userspace-rcu-devel only in flock. So yeah this seems to be some problem here. > A flock layer build-iso should have caught this.  I suspect build-iso > was only performed on a monolithic build. > I know we probably don't have time, but it would be interesting to verify why the monolithic build not catch this and if the flock layer would actually catch it. > Open questions. > 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. > If yes, do we still need whatever modifications were applied to > librados2-13.2.2?  Do they need to be ported to librados2-13.2.10 , or > can we drop librados2 from the set of packages we have patches against? > As I mentioned above, librados2 is build as part of Ceph, so an additional question is would Ceph-13.2.2 have issues using librados2-13.2.10? Or any of the other upgraded Ceph related packages that got updated? Do we need to up-rev Ceph and build for both python2 or python3? > 2) For build-iso... should we prefer locally built packages even though > there is a higher package named in an lst?  If yes, then layered build > needs apply the local first policy accross layers. Alternatively, > perhaps drop the local first policy, but add an audit tool to detect > when a locally built package is being masked in this way. > Is this an edge case or common? Do we know what other cases like this and maybe that informs what kind of audit tool is needed. So, adding an audit tool might have caught this. The librados2 is not actually in any list as it's build as part of Ceph, it comes in as a Requires: for Ceph. The python3 update added it to the flock/rpms_centos.lst file. 
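To make the audit idea concrete, something along these lines could flag the collision. This is a rough sketch only: the two directory arguments and the script name are placeholders rather than the real build-tools layout, it ignores epochs, and a real check would probably want to live inside build-iso or the download step itself.

#!/bin/bash
# Sketch: report package names that exist both as locally built RPMs and as
# downloaded RPMs with a different version-release, i.e. candidates for the
# kind of masking that hit librados2. Paths are illustrative placeholders.
LOCAL_RPMS_DIR=${1:?usage: audit-masked-rpms <local-rpms-dir> <downloaded-rpms-dir>}
DOWNLOADED_RPMS_DIR=${2:?usage: audit-masked-rpms <local-rpms-dir> <downloaded-rpms-dir>}

pkg_list () {
    # Emit "name<TAB>version-release" for every RPM under the given directory.
    find "$1" -name '*.rpm' -print0 |
        xargs -0 -r rpm -qp --queryformat '%{NAME}\t%{VERSION}-%{RELEASE}\n' 2>/dev/null |
        sort -t $'\t' -k1,1
}

# Join on package name and report any name whose two version-release strings
# differ; a human (or rpmdev-vercmp) then decides which copy should win.
join -t $'\t' <(pkg_list "$LOCAL_RPMS_DIR") <(pkg_list "$DOWNLOADED_RPMS_DIR") |
    awk -F '\t' '$2 != $3 { printf "POSSIBLE MASK: %s  local=%s  downloaded=%s\n", $1, $2, $3 }'

Run against a layer's locally built RPM tree and its download directory, something like that would at least make a masked package visible instead of silently changing which rpm lands on the iso.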
Yes, I ducked the local vs higher question right now, maybe knowing the answer about Ceph's usage would help and if we have this issue elsewhere will help me. Sau! > Scott > > > On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: >> Project: STX_build_layer_flock_master_master >> Build #: 132 >> Status: Still Failing >> Timestamp: 20200603T020359Z >> >> Check logs at: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs >> -------------------------------------------------------------------------------- >> Parameters >> >> FULL_BUILD: false >> FORCE_BUILD: false >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Frank.Miller at windriver.com Wed Jun 3 14:11:50 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 14:11:50 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. 
We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. 
This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. 
[OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... 
[OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! 
> > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Wed Jun 3 14:28:01 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 3 Jun 2020 14:28:01 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. 
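(A minimal way to see the two states described above -- assuming the standard sysinv CLI commands referenced elsewhere in this thread; output abbreviated:

    system application-list | grep stx-openstack               # shows whether the app is applied or uploaded
    system helm-override-show stx-openstack mariadb openstack

With the app applied, the second command should return the mariadb overrides; with the app only uploaded, it hits the failure described above.)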
Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since the decoupling commits introduced several regressions (at least 2), I would not propose doing this kind of stability test with the latest build. BTW, do we have a plan to revert them, considering this stability risk? Our Ussuri upgrade patches are waiting for it☹ Furthermore, we have not seen a test case that force-reboots both controllers at the same time. Is it a new requirement? If not, have we passed this case before, and on which build? I'd like to help by using that passing build for comparative analysis. From my point of view, mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 (Unable to unlock controller after swact and lock w/ openstack applied), I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it: https://bugs.launchpad.net/starlingx/+bug/1881722 Below are the time statistics. They seem reasonable; no obvious issue found. 1) 3~4min for host restart and get ready. 
2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? 
For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
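(For scenario 2, a rough sketch of the kind of checks that can be run from the surviving controller while its peer is rebooting -- assuming kubectl plus the fm/system CLIs are available; illustrative only, not an official test procedure:

    kubectl -n openstack get pods -o wide | grep -vE 'Running|Completed'   # pods still recovering
    fm alarm-list                                                          # alarms raised during the reset
    system application-show stx-openstack                                  # app should remain applied
    openstack server list    # via the containerized openstack clients, to confirm the API still answers

If the API keeps answering and the pods return to Running, the scenario passes.)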
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! 
> > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Wed Jun 3 14:34:35 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 14:34:35 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. 
In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 
1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. 
What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! 
> > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Jun 3 14:52:37 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 3 Jun 2020 07:52:37 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> Message-ID: <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> On 6/3/20 1:47 AM, Liu, ZhipengS wrote: > Hi Scott, > > For question #1, > > When we built openstack ussuri image which is python3 only. > > It needs python3-rbd and related dependency, so we add librados2-13.2.10 > and related packages. > > For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for python2. > > Shouldn’t  we let the build choose local build first? > Following up on this we need to be careful about which we choose, as I said in the other email is this a one-off issue or something that we see more of. So maybe an audit tool would help. > Another option is moving these packages to container layer, add > rpms_centos.lst in config/centos/flock/? > I understand this option better after chatting with Zhipeng, I think this might be the best option adding the Updated Ceph / RBD related packages to the container list which will be used for the Usurri container builds but not by the platform OS. This would mean that the containers would have Ceph 13.2.10 related packages and the platform OS would be 13.2.2. Would that cause problems or stability issues? Sau! > Thanks! > > Zhipeng > > *From:*Scott Little > *Sent:* 2020年6月3日15:57 > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] [build-report] > STX_build_layer_flock_master_master - Build # 132 - Still Failing! > > This was an interesting one. > > We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part > of the distro layer for some time. > > A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of > the flock layer. > > Now build-iso preferres locally built packages over downloaded ones, > even if the downloaded on is of higher version.  Now that policy is open > for debate, but that is what it does. 
> > Monolithic build uses the lst files of all layers, but having built > librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects > librados2-13.2.2-0.el7.tis.25.x86_64.rpm over > librados2-13.2.10-0.el7.x86_64.rpm when building the iso. > > Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm > from the distro layer build.  It doesn't build it itself.  The downloads > from the two sources are lumped into a common repo, so it has no reason > to prefer the lower versioned rpm.  It selects > librados2-13.2.10-0.el7.x86_64.rpm. > > The final piece of the puzzle is the transitive list of requires for > librados2-13.2.10-0.el7.x86_64.rpm.  It has a new dependency that pulls > in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs > userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't > included in the recent lst file changes that added > librados2-13.2.10-0.el7.x86_64.rpm. > > A flock layer build-iso should have caught this.  I suspect build-iso > was only performed on a monolithic build. > > Open questions. > 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. > If yes, do we still need whatever modifications were applied to > librados2-13.2.2?  Do they need to be ported to librados2-13.2.10 , or > can we drop librados2 from the set of packages we have patches against? > > 2) For build-iso... should we prefer locally built packages even though > there is a higher package named in an lst?  If yes, then layered build > needs apply the local first policy accross layers.  Alternatively, > perhaps drop the local first policy, but add an audit tool to detect > when a locally built package is being masked in this way. > > Scott > > On 2020-06-02 10:30 p.m., build.starlingx at gmail.com > wrote: > > Project: STX_build_layer_flock_master_master > > Build #: 132 > > Status: Still Failing > > Timestamp: 20200603T020359Z > > Check logs at: > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs > > -------------------------------------------------------------------------------- > > Parameters > > FULL_BUILD: false > > FORCE_BUILD: false > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From alfredo.deluca at gmail.com Wed Jun 3 19:05:03 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Wed, 3 Jun 2020 21:05:03 +0200 Subject: [Starlingx-discuss] Subcloud on a Virtual Machine Message-ID: Hi all. For testing purposes we are trying to install a subcloud on a VM (Openstack to be precise) but we get a couple of errors as below. Booting from an ISO (STX 3.0) we get this 1. ERROR: Specified installation (sda) or boot (sda) device is invalid. then I supposed the ISO is looking for a device *sda* .. so we fixed that but then another issue occurred and the error now is 2. Disk "" given in clearpart command does not exist. Now I wonder if it is possible to install that on top of a VM and also what could it the fix for the second error. Any idea/clue? Cheers -- */Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... 
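(A hedged guess on the sda errors above: with the default virtio bus an OpenStack guest usually sees its root disk as /dev/vda, while the STX 3.0 installer expects /dev/sda -- which would also leave the kickstart's clearpart target empty. One possible workaround is to present the boot disk on a SCSI bus so it enumerates as sda; hw_disk_bus and hw_scsi_model are standard Glance image properties, and the image name below is only a placeholder:

    openstack image set \
        --property hw_disk_bus=scsi \
        --property hw_scsi_model=virtio-scsi \
        stx-3.0-install-image          # placeholder image name

After booting from the updated image, 'lsblk' on the VM console should confirm whether the disk now shows up as sda.)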
URL: From scott.little at windriver.com Wed Jun 3 21:01:51 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 3 Jun 2020 17:01:51 -0400 Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> Message-ID: <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> No, I don't think that would work.  We can't have two versions of the same package competing for dominance within the mock build environments, i.e. one time pkg X builds against 13.2.2, the next time against 13.2.10.  The outcome is dependent on the vagaries of job scheduling, build speeds, and any number of other factors.  If you compile against 13.2.10, will it run OK vs 13.2.2?  I wouldn't want to bet on it. The build layering solution might be to throw it in its own layer. Until we are 100% committed to build layering, we need to converge on ONE version of ceph. Scott On 2020-06-03 10:52 a.m., Saul Wold wrote: > > > On 6/3/20 1:47 AM, Liu, ZhipengS wrote: >> Hi Scott, >> >> For question #1, >> >> When we built openstack ussuri image which is python3 only. >> >> It needs python3-rbd and related dependency, so we add >> librados2-13.2.10 and related packages. >> >> For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for >> python2. >> >> Shouldn’t  we let the build choose local build first? >> > Following up on this we need to be careful about which we choose, as I > said in the other email is this a one-off issue or something that we > see more of.  So maybe an audit tool would help. > >> Another option is moving these packages to container layer, add >> rpms_centos.lst in config/centos/flock/? >> > I understand this option better after chatting with Zhipeng, I think > this might be the best option adding the Updated Ceph / RBD related > packages to the container list which will be used for the Usurri > container builds but not by the platform OS. > > This would mean that the containers would have Ceph 13.2.10 related > packages and the platform OS would be 13.2.2.  Would that cause > problems or stability issues? > > Sau! > >> Thanks! >> >> Zhipeng >> >> *From:*Scott Little >> *Sent:* 2020年6月3日15:57 >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] [build-report] >> STX_build_layer_flock_master_master - Build # 132 - Still Failing! >> >> This was an interesting one. >> >> We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as >> part of the distro layer for some time. >> >> A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst >> of the flock layer. >> >> Now build-iso preferres locally built packages over downloaded ones, >> even if the downloaded on is of higher version.  Now that policy is >> open for debate, but that is what it does. >> >> Monolithic build uses the lst files of all layers, but having built >> librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects >> librados2-13.2.2-0.el7.tis.25.x86_64.rpm over >> librados2-13.2.10-0.el7.x86_64.rpm when building the iso. >> >> Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm >> from the distro layer build.  It doesn't build it itself.  The >> downloads from the two sources are lumped into a common repo, so it >> has no reason to prefer the lower versioned rpm.  
It selects >> librados2-13.2.10-0.el7.x86_64.rpm. >> >> The final piece of the puzzle is the transitive list of requires for >> librados2-13.2.10-0.el7.x86_64.rpm.  It has a new dependency that >> pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs >> userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's >> wasn't included in the recent lst file changes that added >> librados2-13.2.10-0.el7.x86_64.rpm. >> >> A flock layer build-iso should have caught this.  I suspect build-iso >> was only performed on a monolithic build. >> >> Open questions. >> 1) Is there a need to move to librados2-13.2.10 from >> librados2-13.2.2.  If yes, do we still need whatever modifications >> were applied to librados2-13.2.2?  Do they need to be ported to >> librados2-13.2.10 , or can we drop librados2 from the set of packages >> we have patches against? >> >> 2) For build-iso... should we prefer locally built packages even >> though there is a higher package named in an lst?  If yes, then >> layered build needs apply the local first policy accross layers.  >> Alternatively, perhaps drop the local first policy, but add an audit >> tool to detect when a locally built package is being masked in this way. >> >> Scott >> >> On 2020-06-02 10:30 p.m., build.starlingx at gmail.com >> wrote: >> >>     Project: STX_build_layer_flock_master_master >> >>     Build #: 132 >> >>     Status: Still Failing >> >>     Timestamp: 20200603T020359Z >> >>     Check logs at: >> >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs >> >> -------------------------------------------------------------------------------- >> >>     Parameters >> >>     FULL_BUILD: false >> >>     FORCE_BUILD: false >> >> >> >>     _______________________________________________ >> >>     Starlingx-discuss mailing list >> >>     Starlingx-discuss at lists.starlingx.io >> >> >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Wed Jun 3 21:54:22 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Jun 2020 21:54:22 +0000 Subject: [Starlingx-discuss] Weekly build meeting is cancelled due to PTG this week Message-ID: FYI - we will not be meeting at our usual Thursday meeting time for the build project. Frank PL for StarlingX Build project -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jun 3 22:08:29 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 3 Jun 2020 15:08:29 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: On 6/3/20 2:01 PM, Scott Little wrote: > No I don't think that would work.  
We can't have two versions of the > same package competing for dominance within the mock build > environments.  i.e. on time pkg X builds against 13.2.2, the next time > against 13.2.10.  The outcome dependent on the vagaries of job > scheduling, build speeds, and any other number of factors.  If you > compile against 13.2.10, will you run ok vs 13.2.2.  I wouldn't want to > bet on it. > > The build layering solution might be to throw it in it's own layer. > > Until we are 100% committed to build layering, we need to converge on > ONE version of ceph. > Ok, so one option is to move to Ceph 13.2.10 or drop the existing package list update that brings in the python3 and related Ceph packages. Do we need to at least revert that commit in-order to get the build working again? We might need to spend a few minutes to hash this out tomorrow morning at the PTG. Sau! > Scott > > > On 2020-06-03 10:52 a.m., Saul Wold wrote: >> >> >> On 6/3/20 1:47 AM, Liu, ZhipengS wrote: >>> Hi Scott, >>> >>> For question #1, >>> >>> When we built openstack ussuri image which is python3 only. >>> >>> It needs python3-rbd and related dependency, so we add >>> librados2-13.2.10 and related packages. >>> >>> For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for >>> python2. >>> >>> Shouldn’t  we let the build choose local build first? >>> >> Following up on this we need to be careful about which we choose, as I >> said in the other email is this a one-off issue or something that we >> see more of.  So maybe an audit tool would help. >> >>> Another option is moving these packages to container layer, add >>> rpms_centos.lst in config/centos/flock/? >>> >> I understand this option better after chatting with Zhipeng, I think >> this might be the best option adding the Updated Ceph / RBD related >> packages to the container list which will be used for the Usurri >> container builds but not by the platform OS. >> >> This would mean that the containers would have Ceph 13.2.10 related >> packages and the platform OS would be 13.2.2.  Would that cause >> problems or stability issues? >> >> Sau! >> >>> Thanks! >>> >>> Zhipeng >>> >>> *From:*Scott Little >>> *Sent:* 2020年6月3日15:57 >>> *To:* starlingx-discuss at lists.starlingx.io >>> *Subject:* Re: [Starlingx-discuss] [build-report] >>> STX_build_layer_flock_master_master - Build # 132 - Still Failing! >>> >>> This was an interesting one. >>> >>> We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as >>> part of the distro layer for some time. >>> >>> A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst >>> of the flock layer. >>> >>> Now build-iso preferres locally built packages over downloaded ones, >>> even if the downloaded on is of higher version.  Now that policy is >>> open for debate, but that is what it does. >>> >>> Monolithic build uses the lst files of all layers, but having built >>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects >>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm over >>> librados2-13.2.10-0.el7.x86_64.rpm when building the iso. >>> >>> Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm >>> from the distro layer build.  It doesn't build it itself.  The >>> downloads from the two sources are lumped into a common repo, so it >>> has no reason to prefer the lower versioned rpm.  It selects >>> librados2-13.2.10-0.el7.x86_64.rpm. >>> >>> The final piece of the puzzle is the transitive list of requires for >>> librados2-13.2.10-0.el7.x86_64.rpm.  
It has a new dependency that >>> pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs >>> userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's >>> wasn't included in the recent lst file changes that added >>> librados2-13.2.10-0.el7.x86_64.rpm. >>> >>> A flock layer build-iso should have caught this.  I suspect build-iso >>> was only performed on a monolithic build. >>> >>> Open questions. >>> 1) Is there a need to move to librados2-13.2.10 from >>> librados2-13.2.2.  If yes, do we still need whatever modifications >>> were applied to librados2-13.2.2?  Do they need to be ported to >>> librados2-13.2.10 , or can we drop librados2 from the set of packages >>> we have patches against? >>> >>> 2) For build-iso... should we prefer locally built packages even >>> though there is a higher package named in an lst?  If yes, then >>> layered build needs apply the local first policy accross layers. >>> Alternatively, perhaps drop the local first policy, but add an audit >>> tool to detect when a locally built package is being masked in this way. >>> >>> Scott >>> >>> On 2020-06-02 10:30 p.m., build.starlingx at gmail.com >>> wrote: >>> >>>     Project: STX_build_layer_flock_master_master >>> >>>     Build #: 132 >>> >>>     Status: Still Failing >>> >>>     Timestamp: 20200603T020359Z >>> >>>     Check logs at: >>> >>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs >>> >>> >>> -------------------------------------------------------------------------------- >>> >>> >>>     Parameters >>> >>>     FULL_BUILD: false >>> >>>     FORCE_BUILD: false >>> >>> >>> >>>     _______________________________________________ >>> >>>     Starlingx-discuss mailing list >>> >>>     Starlingx-discuss at lists.starlingx.io >>> >>> >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Thu Jun 4 02:27:07 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 3 Jun 2020 22:27:07 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 396 - Failure! 
Message-ID: <269577612.1591.1591237628513.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 396 Status: Failure Timestamp: 20200604T021722Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200604T020352Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200604T020352Z DOCKER_BUILD_ID: jenkins-master-flock-20200604T020352Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200604T020352Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200604T020352Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Thu Jun 4 02:27:10 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 3 Jun 2020 22:27:10 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 133 - Still Failing! In-Reply-To: <139468880.1585.1591151437284.JavaMail.javamailuser@localhost> References: <139468880.1585.1591151437284.JavaMail.javamailuser@localhost> Message-ID: <461822395.1594.1591237630931.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 133 Status: Still Failing Timestamp: 20200604T020352Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200604T020352Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From mingyuan.qi at intel.com Thu Jun 4 07:34:24 2020 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Thu, 4 Jun 2020 07:34:24 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management Message-ID: Hi, In Tuesday's PTG, I have introduced the proposal of a sub-project for small node management: Hummingbird. I put the document link here[0] for community members who are interested in the detail info but haven't joined Tuesday PTG. The target of Hummingbird project is to bring the ability of edge node(small node) management to StarlingX. The project gets the name "Hummingbird" from hummingbird's characteristics: Tiny, stably hovering and echo to "starling". Here are 3 reasons that having Hummingbird as a sub-project: 1. A bunch of technical areas such as containerization, networking, storage and flock services are converged in Hummingbird. The implementation of Hummingbird needs to be well coordinated among these technologies. 2. Hummingbird will be developed in a long term across multiple releases. The development pace of delivering features for small node management could be well discussed in the form of a sub-project. 3. Current sub-projects are organized in fundamental technologies. A sub-project based on individual functionality comes from a different perspective, for example like the projects in Openstack. It will enhance the collaboration of the members in different technology background in the community. All above aim to bring the small node management to StarlingX in a well scheduled and quality ensured way. It's a brand new proposal and I hope you could find something interesting in the doc. Welcome your questions and inputs. 
[0] https://drive.google.com/file/d/1VpglICCzI_PSGdCC12Y7MzhSE8cAolTM/view?usp=sharing Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Thu Jun 4 14:19:05 2020 From: scott.little at windriver.com (Scott Little) Date: Thu, 4 Jun 2020 10:19:05 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: I see https://review.opendev.org/#/c/733426/9 has been posted With this update, layered builds should pass, and would look like this ... * Flock and iso builds will use 13.2.2. * All container builds uses 13.2.10. o Do we want 13.2.10 in ALL containers? * Any ceph dependent rpms from distro/flock builds that make it into a container (if any), will have been compiled against 13.2.2, but will run against 13.2.10.  I'm more comfortable with a increment to the patch level than a decrement.  I think we can live with this until we can move to 13.2.10 universally. Monolithic will continue to build, but will remain confused ... All lst files, including container layer lsts, are downloaded before any package is built. Most if not all packages that depend on ceph will build against 13.2.10 as mock/yum does not understand the 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of which is unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see shipped version LOWER than the complied against version. On 2020-06-03 6:08 p.m., Saul Wold wrote: > > > On 6/3/20 2:01 PM, Scott Little wrote: >> No I don't think that would work.  We can't have two versions of the >> same package competing for dominance within the mock build >> environments.  i.e. on time pkg X builds against 13.2.2, the next >> time against 13.2.10.  The outcome dependent on the vagaries of job >> scheduling, build speeds, and any other number of factors.  If you >> compile against 13.2.10, will you run ok vs 13.2.2.  I wouldn't want >> to bet on it. >> >> The build layering solution might be to throw it in it's own layer. >> >> Until we are 100% committed to build layering, we need to converge on >> ONE version of ceph. >> > Ok, so one option is to move to Ceph 13.2.10 or drop the existing > package list update that brings in the python3 and related Ceph packages. > > Do we need to at least revert that commit in-order to get the build > working again? > > We might need to spend a few minutes to hash this out tomorrow morning > at the PTG. > > Sau! > >> Scott >> >> >> On 2020-06-03 10:52 a.m., Saul Wold wrote: >>> >>> >>> On 6/3/20 1:47 AM, Liu, ZhipengS wrote: >>>> Hi Scott, >>>> >>>> For question #1, >>>> >>>> When we built openstack ussuri image which is python3 only. >>>> >>>> It needs python3-rbd and related dependency, so we add >>>> librados2-13.2.10 and related packages. >>>> >>>> For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for >>>> python2. >>>> >>>> Shouldn’t  we let the build choose local build first? >>>> >>> Following up on this we need to be careful about which we choose, as >>> I said in the other email is this a one-off issue or something that >>> we see more of.  So maybe an audit tool would help. 
>>> >>>> Another option is moving these packages to container layer, add >>>> rpms_centos.lst in config/centos/flock/? >>>> >>> I understand this option better after chatting with Zhipeng, I think >>> this might be the best option adding the Updated Ceph / RBD related >>> packages to the container list which will be used for the Usurri >>> container builds but not by the platform OS. >>> >>> This would mean that the containers would have Ceph 13.2.10 related >>> packages and the platform OS would be 13.2.2.  Would that cause >>> problems or stability issues? >>> >>> Sau! >>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> *From:*Scott Little >>>> *Sent:* 2020年6月3日15:57 >>>> *To:* starlingx-discuss at lists.starlingx.io >>>> *Subject:* Re: [Starlingx-discuss] [build-report] >>>> STX_build_layer_flock_master_master - Build # 132 - Still Failing! >>>> >>>> This was an interesting one. >>>> >>>> We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as >>>> part of the distro layer for some time. >>>> >>>> A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst >>>> of the flock layer. >>>> >>>> Now build-iso preferres locally built packages over downloaded >>>> ones, even if the downloaded on is of higher version.  Now that >>>> policy is open for debate, but that is what it does. >>>> >>>> Monolithic build uses the lst files of all layers, but having built >>>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects >>>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm over >>>> librados2-13.2.10-0.el7.x86_64.rpm when building the iso. >>>> >>>> Flock layer build, downloads >>>> librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer >>>> build.  It doesn't build it itself.  The downloads from the two >>>> sources are lumped into a common repo, so it has no reason to >>>> prefer the lower versioned rpm.  It selects >>>> librados2-13.2.10-0.el7.x86_64.rpm. >>>> >>>> The final piece of the puzzle is the transitive list of requires >>>> for librados2-13.2.10-0.el7.x86_64.rpm.  It has a new dependency >>>> that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn >>>> needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. >>>> It's wasn't included in the recent lst file changes that added >>>> librados2-13.2.10-0.el7.x86_64.rpm. >>>> >>>> A flock layer build-iso should have caught this.  I suspect >>>> build-iso was only performed on a monolithic build. >>>> >>>> Open questions. >>>> 1) Is there a need to move to librados2-13.2.10 from >>>> librados2-13.2.2.  If yes, do we still need whatever modifications >>>> were applied to librados2-13.2.2?  Do they need to be ported to >>>> librados2-13.2.10 , or can we drop librados2 from the set of >>>> packages we have patches against? >>>> >>>> 2) For build-iso... should we prefer locally built packages even >>>> though there is a higher package named in an lst?  If yes, then >>>> layered build needs apply the local first policy accross layers. >>>> Alternatively, perhaps drop the local first policy, but add an >>>> audit tool to detect when a locally built package is being masked >>>> in this way. 
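Such an audit could be prototyped outside the build scripts themselves. Below is a minimal Python sketch of the idea — hypothetical, not part of the existing StarlingX build tooling; the two directory arguments and the simplified version comparison are assumptions, and a real tool would reuse rpm's own epoch/version/release comparison.

#!/usr/bin/env python3
# Sketch of the audit idea above: report locally built RPMs whose name also
# appears among downloaded RPMs with a higher version, i.e. packages that a
# "highest version wins" repo setup would mask.  Hypothetical example, not
# part of the StarlingX build tooling.
import os
import sys

def parse_rpm(filename):
    # name-version-release.arch.rpm -> (name, version, release)
    base = filename[:-len(".rpm")]
    base = base.rsplit(".", 1)[0]            # drop arch
    name, version, release = base.rsplit("-", 2)
    return name, version, release

def index(directory):
    # Map package name -> version (last one seen wins; enough for a sketch).
    rpms = {}
    for f in os.listdir(directory):
        if f.endswith(".rpm"):
            name, version, _release = parse_rpm(f)
            rpms[name] = version
    return rpms

def vertuple(version):
    # Naive comparison key: split on dots, compare numeric parts as ints.
    return tuple(int(p) if p.isdigit() else p for p in version.split("."))

def main(local_dir, download_dir):
    local, downloaded = index(local_dir), index(download_dir)
    for name in sorted(set(local) & set(downloaded)):
        lv, dv = local[name], downloaded[name]
        try:
            masked = vertuple(dv) > vertuple(lv)
        except TypeError:                    # mixed numeric/alpha parts
            masked = dv != lv
        if masked:
            print(f"{name}: local {lv} would be masked by downloaded {dv}")

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])

Pointed at a directory of locally built RPMs and a directory of downloaded RPMs, a check like this would flag the librados2 13.2.2 vs 13.2.10 case discussed in this thread.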
>>>> >>>> Scott >>>> >>>> On 2020-06-02 10:30 p.m., build.starlingx at gmail.com >>>> wrote: >>>> >>>>     Project: STX_build_layer_flock_master_master >>>> >>>>     Build #: 132 >>>> >>>>     Status: Still Failing >>>> >>>>     Timestamp: 20200603T020359Z >>>> >>>>     Check logs at: >>>> >>>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs >>>> >>>> >>>> -------------------------------------------------------------------------------- >>>> >>>> >>>>     Parameters >>>> >>>>     FULL_BUILD: false >>>> >>>>     FORCE_BUILD: false >>>> >>>> >>>> >>>>     _______________________________________________ >>>> >>>>     Starlingx-discuss mailing list >>>> >>>>     Starlingx-discuss at lists.starlingx.io >>>> >>>> >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Thu Jun 4 14:36:06 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 4 Jun 2020 14:36:06 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: Hi Scott, For our OpenStack upgrade case, we may have one more option that is not adding this ceph 13.2.10 repo to local build repo folder. Instead, we add this ceph repo as a parameter when we run build-stx-base.sh. Then this repo only used by OpenStack build. We will verify it tomorrow. Thanks! Zhipeng From: Scott Little Sent: 2020年6月4日 22:19 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! I see https://review.opendev.org/#/c/733426/9 has been posted With this update, layered builds should pass, and would look like this ... * Flock and iso builds will use 13.2.2. * All container builds uses 13.2.10. * Do we want 13.2.10 in ALL containers? * Any ceph dependent rpms from distro/flock builds that make it into a container (if any), will have been compiled against 13.2.2, but will run against 13.2.10. I'm more comfortable with a increment to the patch level than a decrement. I think we can live with this until we can move to 13.2.10 universally. Monolithic will continue to build, but will remain confused ... All lst files, including container layer lsts, are downloaded before any package is built. 
Most if not all packages that depend on ceph will build against 13.2.10 as mock/yum does not understand the 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of which is unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see shipped version LOWER than the complied against version. On 2020-06-03 6:08 p.m., Saul Wold wrote: On 6/3/20 2:01 PM, Scott Little wrote: No I don't think that would work. We can't have two versions of the same package competing for dominance within the mock build environments. i.e. on time pkg X builds against 13.2.2, the next time against 13.2.10. The outcome dependent on the vagaries of job scheduling, build speeds, and any other number of factors. If you compile against 13.2.10, will you run ok vs 13.2.2. I wouldn't want to bet on it. The build layering solution might be to throw it in it's own layer. Until we are 100% committed to build layering, we need to converge on ONE version of ceph. Ok, so one option is to move to Ceph 13.2.10 or drop the existing package list update that brings in the python3 and related Ceph packages. Do we need to at least revert that commit in-order to get the build working again? We might need to spend a few minutes to hash this out tomorrow morning at the PTG. Sau! Scott On 2020-06-03 10:52 a.m., Saul Wold wrote: On 6/3/20 1:47 AM, Liu, ZhipengS wrote: Hi Scott, For question #1, When we built openstack ussuri image which is python3 only. It needs python3-rbd and related dependency, so we add librados2-13.2.10 and related packages. For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for python2. Shouldn’t we let the build choose local build first? Following up on this we need to be careful about which we choose, as I said in the other email is this a one-off issue or something that we see more of. So maybe an audit tool would help. Another option is moving these packages to container layer, add rpms_centos.lst in config/centos/flock/? I understand this option better after chatting with Zhipeng, I think this might be the best option adding the Updated Ceph / RBD related packages to the container list which will be used for the Usurri container builds but not by the platform OS. This would mean that the containers would have Ceph 13.2.10 related packages and the platform OS would be 13.2.2. Would that cause problems or stability issues? Sau! Thanks! Zhipeng *From:*Scott Little *Sent:* 2020年6月3日15:57 *To:* starlingx-discuss at lists.starlingx.io *Subject:* Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! This was an interesting one. We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso preferres locally built packages over downloaded ones, even if the downloaded on is of higher version. Now that policy is open for debate, but that is what it does. Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower versioned rpm. 
It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions. 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10 , or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then layered build needs apply the local first policy accross layers. Alternatively, perhaps drop the local first policy, but add an audit tool to detect when a locally built package is being masked in this way. Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jun 4 14:40:20 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 4 Jun 2020 14:40:20 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management In-Reply-To: References: Message-ID: <20200604144020.rqarmwxhzxngoj2v@yuggoth.org> On 2020-06-04 07:34:24 +0000 (+0000), Qi, Mingyuan wrote: > In Tuesday's PTG, I have introduced the proposal of a sub-project > for small node management: Hummingbird. [...] You may want to take care that it's not confused with https://opendev.org/openstack/swift/src/branch/feature/hummingbird/go (a reimplementation of OpenStack Swift's object-server in golang), but since that effort hasn't seen any activity in several years it's probable that not many people remember it anyway. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From aj at suse.com Thu Jun 4 15:55:10 2020 From: aj at suse.com (Andreas Jaeger) Date: Thu, 4 Jun 2020 17:55:10 +0200 Subject: [Starlingx-discuss] Fwd: [docs][all] Important changes in recent openstackdocstheme updates In-Reply-To: <67de416d-8881-66f5-29d9-29069290e354@suse.com> References: <67de416d-8881-66f5-29d9-29069290e354@suse.com> Message-ID: <4b09df96-8a13-3acc-247f-8806b03016f7@suse.com> I pushed changes for all starlingx repos that use openstackdocstheme to update to newer version, see the attached email for a longer explanation that I send to the openstack list. Full set of changes is: https://review.opendev.org/#/q/topic:reno-openstackdocstheme+is:open+projects:starlingx If there are any questions, please reach out to me - otherwise, happy reviewing ;) Andreas -------- Forwarded Message -------- Subject: [docs][all] Important changes in recent openstackdocstheme updates Date: Wed, 20 May 2020 17:40:07 +0200 From: Andreas Jaeger Organization: SUSE Software Solutions Germany GmbH, Nuernberg; GF: Felix Imendörffer; HRB 247165 (AG München) To: openstack-discuss at lists.openstack.org CC: Stephen Finucane A couple of changes recently merged into openstackdocstheme to fix problems reported. These had some surprises in it and we'd like to inform you about the changes: * Config options are now prefixed with openstackdocs_, the old names will be removed in a future release * The 'project' config option is now only respected (and displayed in the left menu) if 'openstackdocs_auto_name = False' is set. By default, the theme uses the package name (from setup.cfg) * The HTML files show the version number by default (with exception of releasenotes and api docs) calculated from git. If you want to use your own version number or disable it, set 'openstackdocs_auto_version = False' and manually configure the 'version' and 'release' options. * Previously, the theme always used 'pygments_style = "native"' and overrode the setting of 'sphinx' that many repos have. Now the setting is respected. For a few repos this lead to unreadable code snippets. If you see this or want to go back to the previous theme, configure 'pygments_style = "native"'. * Many projects have written PDF documents. openstackdocstheme can now optionally link to them. Set 'openstackdocs_pdf_link' to True to show the icon with path. Note that the PDF file is placed on docs.openstack.org in the top of the html files while in check/gate it's in a separate PDF folder. Thus, the site preview will show in check/gate a broken link - but it works fine, check [2]. * Both reno (since version 3.1.0) and openstackdocsstheme are now declared parallel safe, the CI jobs automatically build releasenotes in parallel [1]. You can modify your local tox job to do this by adding the '-j auto' parameter to your 'sphinx-build' invocation. We're releasing openstackdocstheme version 2.2.1 soon with two further fixes: * PDF documents will now show the version number like html document, no need to configure versions in conf.py for this anymore [3]. * small bug fix (if you set auto_name = False in doc/source/conf.py, this hit so far 5 repos)[4]. Everything is documented in the documentation of openstackdocstheme [2]. If there are any questions, best ask in #openstack-oslo. Andreas has started pushing changes to update projects with topic:reno-openstackdocstheme. 
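For a concrete picture, here is a minimal doc/source/conf.py sketch using the renamed options described above. The values are placeholders, and whether a given project wants auto naming/versioning, the PDF link, or the 'native' pygments style is a per-project choice; the extensions/html_theme lines are simply the theme's usual setup, not something new in this release.

# doc/source/conf.py (sketch)
extensions = [
    'openstackdocstheme',
]
html_theme = 'openstackdocs'

# New-style option names; the old un-prefixed names are deprecated.
openstackdocs_auto_name = False     # keep using 'project' below for the menu name
project = 'Example Project Docs'    # placeholder
openstackdocs_auto_version = False  # set version/release by hand instead of from git
version = '1.0.0'                   # placeholder
release = '1.0.0'
openstackdocs_pdf_link = True       # show a link to the built PDF, if one is published

# The theme no longer overrides this, so set it explicitly to keep the old look.
pygments_style = 'native'

With openstackdocs_auto_name and openstackdocs_auto_version left at their defaults, the theme instead derives the name from the package metadata and the version from git, as described above.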
Hope that's all for Victoria on the openstackdocstheme, Stephen and Andreas [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014902.html [2] https://docs.openstack.org/openstackdocstheme/ [3] https://review.opendev.org/729554 [4] https://review.opendev.org/729031 -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From bruce.e.jones at intel.com Thu Jun 4 17:42:27 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 4 Jun 2020 17:42:27 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management In-Reply-To: References: Message-ID: Mingyuan, thank you for bringing this proposal forward. I'd like to explore the idea of creating a sub-project for this. Do you have an estimate as to which repos the project will be working in? Are there new repos to be created? Will the changes land in other sub-project areas? If so, which? If we can figure out where the code lands, that would help us figure out which existing sub-project (if any) should be the home for this code, or if a new sub-project is needed. brucej From: Qi, Mingyuan Sent: Thursday, June 4, 2020 12:34 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Hummingbird: A project for small node management Hi, In Tuesday's PTG, I have introduced the proposal of a sub-project for small node management: Hummingbird. I put the document link here[0] for community members who are interested in the detail info but haven't joined Tuesday PTG. The target of Hummingbird project is to bring the ability of edge node(small node) management to StarlingX. The project gets the name "Hummingbird" from hummingbird's characteristics: Tiny, stably hovering and echo to "starling". Here are 3 reasons that having Hummingbird as a sub-project: 1. A bunch of technical areas such as containerization, networking, storage and flock services are converged in Hummingbird. The implementation of Hummingbird needs to be well coordinated among these technologies. 2. Hummingbird will be developed in a long term across multiple releases. The development pace of delivering features for small node management could be well discussed in the form of a sub-project. 3. Current sub-projects are organized in fundamental technologies. A sub-project based on individual functionality comes from a different perspective, for example like the projects in Openstack. It will enhance the collaboration of the members in different technology background in the community. All above aim to bring the small node management to StarlingX in a well scheduled and quality ensured way. It's a brand new proposal and I hope you could find something interesting in the doc. Welcome your questions and inputs. [0] https://drive.google.com/file/d/1VpglICCzI_PSGdCC12Y7MzhSE8cAolTM/view?usp=sharing Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From maryx.camp at intel.com Thu Jun 4 19:49:37 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 4 Jun 2020 19:49:37 +0000 Subject: [Starlingx-discuss] Fwd: [docs][all] Important changes in recent openstackdocstheme updates In-Reply-To: <4b09df96-8a13-3acc-247f-8806b03016f7@suse.com> References: <67de416d-8881-66f5-29d9-29069290e354@suse.com> <4b09df96-8a13-3acc-247f-8806b03016f7@suse.com> Message-ID: Thanks Andreas! 
I have no experience with theme file updates, your help with the StarlingX docs is much appreciated. thanks again, Mary Camp PTIGlobal Technical Writer | maryx.camp at intel.com -----Original Message----- From: Andreas Jaeger Sent: Thursday, June 4, 2020 11:55 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Fwd: [docs][all] Important changes in recent openstackdocstheme updates I pushed changes for all starlingx repos that use openstackdocstheme to update to newer version, see the attached email for a longer explanation that I send to the openstack list. Full set of changes is: https://review.opendev.org/#/q/topic:reno-openstackdocstheme+is:open+projects:starlingx If there are any questions, please reach out to me - otherwise, happy reviewing ;) Andreas -------- Forwarded Message -------- Subject: [docs][all] Important changes in recent openstackdocstheme updates Date: Wed, 20 May 2020 17:40:07 +0200 From: Andreas Jaeger Organization: SUSE Software Solutions Germany GmbH, Nuernberg; GF: Felix Imendörffer; HRB 247165 (AG München) To: openstack-discuss at lists.openstack.org CC: Stephen Finucane A couple of changes recently merged into openstackdocstheme to fix problems reported. These had some surprises in it and we'd like to inform you about the changes: * Config options are now prefixed with openstackdocs_, the old names will be removed in a future release * The 'project' config option is now only respected (and displayed in the left menu) if 'openstackdocs_auto_name = False' is set. By default, the theme uses the package name (from setup.cfg) * The HTML files show the version number by default (with exception of releasenotes and api docs) calculated from git. If you want to use your own version number or disable it, set 'openstackdocs_auto_version = False' and manually configure the 'version' and 'release' options. * Previously, the theme always used 'pygments_style = "native"' and overrode the setting of 'sphinx' that many repos have. Now the setting is respected. For a few repos this lead to unreadable code snippets. If you see this or want to go back to the previous theme, configure 'pygments_style = "native"'. * Many projects have written PDF documents. openstackdocstheme can now optionally link to them. Set 'openstackdocs_pdf_link' to True to show the icon with path. Note that the PDF file is placed on docs.openstack.org in the top of the html files while in check/gate it's in a separate PDF folder. Thus, the site preview will show in check/gate a broken link - but it works fine, check [2]. * Both reno (since version 3.1.0) and openstackdocsstheme are now declared parallel safe, the CI jobs automatically build releasenotes in parallel [1]. You can modify your local tox job to do this by adding the '-j auto' parameter to your 'sphinx-build' invocation. We're releasing openstackdocstheme version 2.2.1 soon with two further fixes: * PDF documents will now show the version number like html document, no need to configure versions in conf.py for this anymore [3]. * small bug fix (if you set auto_name = False in doc/source/conf.py, this hit so far 5 repos)[4]. Everything is documented in the documentation of openstackdocstheme [2]. If there are any questions, best ask in #openstack-oslo. Andreas has started pushing changes to update projects with topic:reno-openstackdocstheme. 
Hope that's all for Victoria on the openstackdocstheme, Stephen and Andreas [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-May/014902.html [2] https://docs.openstack.org/openstackdocstheme/ [3] https://review.opendev.org/729554 [4] https://review.opendev.org/729031 -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Fri Jun 5 02:23:55 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Jun 2020 22:23:55 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 399 - Failure! Message-ID: <226828596.1601.1591323835965.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 399 Status: Failure Timestamp: 20200605T021358Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200605T020038Z DOCKER_BUILD_ID: jenkins-master-flock-20200605T020038Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200605T020038Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Fri Jun 5 02:23:57 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Jun 2020 22:23:57 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 134 - Still Failing! In-Reply-To: <1888841825.1592.1591237629065.JavaMail.javamailuser@localhost> References: <1888841825.1592.1591237629065.JavaMail.javamailuser@localhost> Message-ID: <683945570.1604.1591323838250.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 134 Status: Still Failing Timestamp: 20200605T020038Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From zhipengs.liu at intel.com Fri Jun 5 06:36:14 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 5 Jun 2020 06:36:14 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank, As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. My test step is: 1) ssh to standby controller and sudo reboot -f for it. 2) sudo reboot -f for activated controller All pods can resume after a while. However, I could reproduce this issue with DB 20200516T080009Z. From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. 
In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes below 2 patches which fixed this stability issue. https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death [1] https://bugs.launchpad.net/starlingx/+bug/1881899 [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:35 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? 
I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. 
@Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. 
Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. 
[OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... 
[OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  
Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From mingyuan.qi at intel.com Fri Jun 5 07:13:57 2020 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Fri, 5 Jun 2020 07:13:57 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management In-Reply-To: References: Message-ID: Bruce, Thanks for your input, I've mapped Hummingbird's components to repos as well as to the existing sub-projects below:
Components of HB           Related sub-project                           Landed in repos
New personality            Flock services project                       config
Networking                 Networking project / Security project        integ
Provisioning               Containers project                           ansible-playbook
Management                 Flock services project / Containers project  config/metal/fault
Storage                    Non-openstack project                        TBD
App orchestration          Containers project                           TBD
Dist-cloud collaboration   Distributed cloud project                    distcloud
As you can see, the components will land in multiple repos across multiple sub-projects. Mingyuan From: Jones, Bruce E Sent: Friday, June 5, 2020 1:42 To: Qi, Mingyuan ; starlingx-discuss at lists.starlingx.io Subject: RE: Hummingbird: A project for small node management Mingyuan, thank you for bringing this proposal forward. I'd like to explore the idea of creating a sub-project for this. Do you have an estimate as to which repos the project will be working in? Are there new repos to be created? Will the changes land in other sub-project areas? If so, which? 
If we can figure out where the code lands, that would help us figure out which existing sub-project (if any) should be the home for this code, or if a new sub-project is needed. brucej From: Qi, Mingyuan > Sent: Thursday, June 4, 2020 12:34 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Hummingbird: A project for small node management Hi, In Tuesday's PTG, I have introduced the proposal of a sub-project for small node management: Hummingbird. I put the document link here[0] for community members who are interested in the detail info but haven't joined Tuesday PTG. The target of Hummingbird project is to bring the ability of edge node(small node) management to StarlingX. The project gets the name "Hummingbird" from hummingbird's characteristics: Tiny, stably hovering and echo to "starling". Here are 3 reasons that having Hummingbird as a sub-project: 1. A bunch of technical areas such as containerization, networking, storage and flock services are converged in Hummingbird. The implementation of Hummingbird needs to be well coordinated among these technologies. 2. Hummingbird will be developed in a long term across multiple releases. The development pace of delivering features for small node management could be well discussed in the form of a sub-project. 3. Current sub-projects are organized in fundamental technologies. A sub-project based on individual functionality comes from a different perspective, for example like the projects in Openstack. It will enhance the collaboration of the members in different technology background in the community. All above aim to bring the small node management to StarlingX in a well scheduled and quality ensured way. It's a brand new proposal and I hope you could find something interesting in the doc. Welcome your questions and inputs. [0] https://drive.google.com/file/d/1VpglICCzI_PSGdCC12Y7MzhSE8cAolTM/view?usp=sharing Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Fri Jun 5 14:31:39 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 5 Jun 2020 14:31:39 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Zhipeng: This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020. If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0. 
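For anyone re-running that dual-controller reset check against an Ussuri build, a minimal verification sketch follows. The pod and namespace names (mariadb-server-0, the openstack namespace) are the usual openstack-helm conventions and database credential handling is omitted, so treat this as illustrative rather than the exact test procedure:

# Reset the standby controller first, then the active one
ssh controller-1 'sudo reboot -f'
sudo reboot -f

# After both controllers are back, watch the openstack namespace recover
kubectl -n openstack get pods | grep -E 'mariadb|openvswitch|neutron'

# Confirm the Galera cluster reformed with both members
# (expect wsrep_cluster_size = 2 on AIO-DX; mysql auth options omitted here)
kubectl -n openstack exec mariadb-server-0 -- mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"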
Frank [1] https://storyboard.openstack.org/#!/story/2004712 -----Original Message----- From: Liu, ZhipengS Sent: Friday, June 05, 2020 2:36 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. My test step is: 1) ssh to standby controller and sudo reboot -f for it. 2) sudo reboot -f for activated controller All pods can resume after a while. However, I could reproduce this issue with DB 20200516T080009Z. From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes below 2 patches which fixed this stability issue. https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death [1] https://bugs.launchpad.net/starlingx/+bug/1881899 [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:35 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. 
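To reproduce the distinction described above, it should be enough to check the application state first and then query the chart overrides; the chart and namespace arguments below are the ones from the LP quoted later in this thread:

# Check whether stx-openstack is currently applied or only uploaded
system application-show stx-openstack

# Reported to work while the app is applied, and to fail while it is only uploaded
system helm-override-show stx-openstack mariadb openstack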
Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 
2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? 
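For the pod-recovery delay and the FailedMount / probe warnings quoted earlier in this thread, the stuck pods and the events behind them can be inspected directly on the cluster; a rough sketch (the openvswitch-db pod name is taken from the log above and will differ on other deployments):

# Pods in the openstack namespace that are not fully up yet
kubectl -n openstack get pods | grep -vE 'Running|Completed'

# Mount and probe events for one of the stuck pods
kubectl -n openstack describe pod openvswitch-db-8fxkw

# Recent events across the namespace, oldest first
kubectl -n openstack get events --sort-by=.lastTimestamp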
For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. 
AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... 
[OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? 
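On the IPv6 point, the workaround described above is a static mariadb chart override. Expressed as a user override rather than a manifest change, it would look roughly like the sketch below, assuming the usual helm-override CLI and a reapply of stx-openstack afterwards; the conf.database.config_override path is the one shown in the log above:

cat > mariadb-ipv6.yaml << 'EOF'
conf:
  database:
    config_override: |
      [mysqld]
      bind_address=::
EOF

system helm-override-update stx-openstack mariadb openstack --values mariadb-ipv6.yaml
system application-apply stx-openstack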
Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. > > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! 
> > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Fri Jun 5 15:59:27 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 5 Jun 2020 15:59:27 +0000 Subject: [Starlingx-discuss] Hummingbird: A project for small node management In-Reply-To: References: Message-ID: Mingyuan, thank you. I've put this on the agenda for the next TSC call. brucej From: Qi, Mingyuan Sent: Friday, June 5, 2020 12:14 AM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: RE: Hummingbird: A project for small node management Bruce, Thanks for you input, I've mapped the Hummingbird's components to repos as well as existing sub-projects below: Components of HB Related sub-project Landed in repos New personality Flock services project config Networking Networking project/ Security project integ Provisioning Containers project ansible-playbook Management Flock services project/ Containers project config/metal/fault Storage Non-openstack project TBD App orchestration Containers project TBD Dist-cloud collaboration Distributed cloud project distcloud As you can see, the components will be landed in multiple repos across multiple sub-projects. Mingyuan From: Jones, Bruce E > Sent: Friday, June 5, 2020 1:42 To: Qi, Mingyuan >; starlingx-discuss at lists.starlingx.io Subject: RE: Hummingbird: A project for small node management Mingyuan, thank you for bringing this proposal forward. I'd like to explore the idea of creating a sub-project for this. Do you have an estimate as to which repos the project will be working in? Are there new repos to be created? Will the changes land in other sub-project areas? If so, which? If we can figure out where the code lands, that would help us figure out which existing sub-project (if any) should be the home for this code, or if a new sub-project is needed. brucej From: Qi, Mingyuan > Sent: Thursday, June 4, 2020 12:34 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Hummingbird: A project for small node management Hi, In Tuesday's PTG, I have introduced the proposal of a sub-project for small node management: Hummingbird. 
I put the document link here[0] for community members who are interested in the detail info but haven't joined Tuesday PTG. The target of Hummingbird project is to bring the ability of edge node(small node) management to StarlingX. The project gets the name "Hummingbird" from hummingbird's characteristics: Tiny, stably hovering and echo to "starling". Here are 3 reasons that having Hummingbird as a sub-project: 1. A bunch of technical areas such as containerization, networking, storage and flock services are converged in Hummingbird. The implementation of Hummingbird needs to be well coordinated among these technologies. 2. Hummingbird will be developed in a long term across multiple releases. The development pace of delivering features for small node management could be well discussed in the form of a sub-project. 3. Current sub-projects are organized in fundamental technologies. A sub-project based on individual functionality comes from a different perspective, for example like the projects in Openstack. It will enhance the collaboration of the members in different technology background in the community. All above aim to bring the small node management to StarlingX in a well scheduled and quality ensured way. It's a brand new proposal and I hope you could find something interesting in the doc. Welcome your questions and inputs. [0] https://drive.google.com/file/d/1VpglICCzI_PSGdCC12Y7MzhSE8cAolTM/view?usp=sharing Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Jun 5 17:39:36 2020 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 5 Jun 2020 10:39:36 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 399 - Failure! In-Reply-To: <226828596.1601.1591323835965.JavaMail.javamailuser@localhost> References: <226828596.1601.1591323835965.JavaMail.javamailuser@localhost> Message-ID: Is anyone looking into this build failure? Is this still related to the python3 packages and multiple versions of packages? If so, what's the next steps to resolve this? We have not had a successful build this week! Thanks Sau! 
On 6/4/20 7:23 PM, build.starlingx at gmail.com wrote: > Project: STX_build_pre_installer_layered > Build #: 399 > Status: Failure > Timestamp: 20200605T021358Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs > -------------------------------------------------------------------------------- > Parameters > > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200605T020038Z > DOCKER_BUILD_ID: jenkins-master-flock-20200605T020038Z-builder > OS: centos > MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200605T020038Z/logs > FULL_BUILD: false > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200605T020038Z/logs > MASTER_JOB_NAME: STX_build_layer_flock_master_master > LAYER: flock > MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock > BUILD_ISO: true > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From zhipengs.liu at intel.com Sat Jun 6 01:30:00 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Sat, 6 Jun 2020 01:30:00 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: Hi Scott, We have updated the patch below as you see and fixed your comment as well, thanks! https://review.opendev.org/#/c/733426/ It has been verified by Chengde! Many thanks!! After this patch get merged, could you do me a favor to cherry pick below patches to check if OpenStack images build can be triggered successfully by cengn script? (glance, cinder, nova, horizon) https://review.opendev.org/#/c/712880/ Modify build-tools and stable-wheels for Ussuri upgrading https://review.opendev.org/#/c/712862/ Update openstack docker images for stable/ussuri You might need add below repo in your build script. --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ Thanks a lot! Zhipeng From: Liu, ZhipengS Sent: 2020年6月4日 22:36 To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! Hi Scott, For our OpenStack upgrade case, we may have one more option that is not adding this ceph 13.2.10 repo to local build repo folder. Instead, we add this ceph repo as a parameter when we run build-stx-base.sh. Then this repo only used by OpenStack build. We will verify it tomorrow. Thanks! Zhipeng From: Scott Little > Sent: 2020年6月4日 22:19 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! I see https://review.opendev.org/#/c/733426/9 has been posted With this update, layered builds should pass, and would look like this ... * Flock and iso builds will use 13.2.2. * All container builds uses 13.2.10. * Do we want 13.2.10 in ALL containers? * Any ceph dependent rpms from distro/flock builds that make it into a container (if any), will have been compiled against 13.2.2, but will run against 13.2.10. 
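For reference, the per-build ceph repo mentioned above would be passed to the base image build roughly as follows. The script path and the --os argument are assumptions about the usual build-tools layout, the --repo value is the one quoted in this thread, and the registry/tag/push options used by the normal CENGN job are omitted:

cd "$MY_REPO"/build-tools/build-docker-images

# Add the ussuri ceph repo only for this base-image build
./build-stx-base.sh \
    --os centos \
    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/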
I'm more comfortable with a increment to the patch level than a decrement. I think we can live with this until we can move to 13.2.10 universally. Monolithic will continue to build, but will remain confused ... All lst files, including container layer lsts, are downloaded before any package is built. Most if not all packages that depend on ceph will build against 13.2.10 as mock/yum does not understand the 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of which is unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see shipped version LOWER than the complied against version. On 2020-06-03 6:08 p.m., Saul Wold wrote: On 6/3/20 2:01 PM, Scott Little wrote: No I don't think that would work. We can't have two versions of the same package competing for dominance within the mock build environments. i.e. on time pkg X builds against 13.2.2, the next time against 13.2.10. The outcome dependent on the vagaries of job scheduling, build speeds, and any other number of factors. If you compile against 13.2.10, will you run ok vs 13.2.2. I wouldn't want to bet on it. The build layering solution might be to throw it in it's own layer. Until we are 100% committed to build layering, we need to converge on ONE version of ceph. Ok, so one option is to move to Ceph 13.2.10 or drop the existing package list update that brings in the python3 and related Ceph packages. Do we need to at least revert that commit in-order to get the build working again? We might need to spend a few minutes to hash this out tomorrow morning at the PTG. Sau! Scott On 2020-06-03 10:52 a.m., Saul Wold wrote: On 6/3/20 1:47 AM, Liu, ZhipengS wrote: Hi Scott, For question #1, When we built openstack ussuri image which is python3 only. It needs python3-rbd and related dependency, so we add librados2-13.2.10 and related packages. For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for python2. Shouldn’t we let the build choose local build first? Following up on this we need to be careful about which we choose, as I said in the other email is this a one-off issue or something that we see more of. So maybe an audit tool would help. Another option is moving these packages to container layer, add rpms_centos.lst in config/centos/flock/? I understand this option better after chatting with Zhipeng, I think this might be the best option adding the Updated Ceph / RBD related packages to the container list which will be used for the Usurri container builds but not by the platform OS. This would mean that the containers would have Ceph 13.2.10 related packages and the platform OS would be 13.2.2. Would that cause problems or stability issues? Sau! Thanks! Zhipeng *From:*Scott Little *Sent:* 2020年6月3日15:57 *To:* starlingx-discuss at lists.starlingx.io *Subject:* Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! This was an interesting one. We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso preferres locally built packages over downloaded ones, even if the downloaded on is of higher version. Now that policy is open for debate, but that is what it does. 
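To see the version conflict concretely before build-iso makes its choice, the two package versions can be compared and the repos offering them listed with standard tooling; a sketch, assuming rpmdevtools and yum are available on the build host:

# RPM version comparison: the downloaded package really is the newer one
rpmdev-vercmp 13.2.2-0.el7.tis.25 13.2.10-0.el7

# Which repos offer librados2, and at which versions, on this build host
yum --showduplicates list librados2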
Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. Flock layer build, downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build. It doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It's wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions. 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2. If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10 , or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then layered build needs apply the local first policy accross layers. Alternatively, perhaps drop the local first policy, but add an audit tool to detect when a locally built package is being masked in this way. Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sat Jun 6 01:58:13 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 5 Jun 2020 21:58:13 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 402 - Failure! 
Message-ID: <695418467.1608.1591408694247.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 402 Status: Failure Timestamp: 20200606T014803Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200606T013408Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200606T013408Z DOCKER_BUILD_ID: jenkins-master-flock-20200606T013408Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200606T013408Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200606T013408Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Sat Jun 6 01:58:15 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 5 Jun 2020 21:58:15 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 135 - Still Failing! In-Reply-To: <76095783.1602.1591323836489.JavaMail.javamailuser@localhost> References: <76095783.1602.1591323836489.JavaMail.javamailuser@localhost> Message-ID: <109858926.1611.1591408696472.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 135 Status: Still Failing Timestamp: 20200606T013408Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200606T013408Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Sun Jun 7 01:58:30 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 6 Jun 2020 21:58:30 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 405 - Failure! Message-ID: <701780729.1615.1591495111736.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 405 Status: Failure Timestamp: 20200607T014748Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T013413Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200607T013413Z DOCKER_BUILD_ID: jenkins-master-flock-20200607T013413Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T013413Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200607T013413Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Sun Jun 7 01:58:33 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 6 Jun 2020 21:58:33 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 136 - Still Failing! 
In-Reply-To: <1803572455.1609.1591408694767.JavaMail.javamailuser@localhost> References: <1803572455.1609.1591408694767.JavaMail.javamailuser@localhost> Message-ID: <169013534.1618.1591495113943.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 136 Status: Still Failing Timestamp: 20200607T013413Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T013413Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Sun Jun 7 23:28:12 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 7 Jun 2020 19:28:12 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 408 - Failure! Message-ID: <1355935677.1622.1591572492731.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 408 Status: Failure Timestamp: 20200607T231743Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T230408Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20200607T230408Z DOCKER_BUILD_ID: jenkins-master-flock-20200607T230408Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T230408Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20200607T230408Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Sun Jun 7 23:28:14 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 7 Jun 2020 19:28:14 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 137 - Still Failing! In-Reply-To: <1645969591.1616.1591495112325.JavaMail.javamailuser@localhost> References: <1645969591.1616.1591495112325.JavaMail.javamailuser@localhost> Message-ID: <2070081043.1625.1591572494885.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 137 Status: Still Failing Timestamp: 20200607T230408Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T230408Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From haochuan.z.chen at intel.com Mon Jun 8 02:21:39 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Mon, 8 Jun 2020 02:21:39 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: Message-ID: Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From agung at btech.id Mon Jun 8 02:31:12 2020 From: agung at btech.id (Rahmat Agung) Date: Mon, 8 Jun 2020 09:31:12 +0700 Subject: [Starlingx-discuss] ERROR when deploy stx-monitor. 
Message-ID: I try to deploy stx-monitor on 3 nworker nodes with label like this: ``` worker-3 Ready 2d18h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-client=enabled,elastic-controller=enabled,elastic-data=enabled,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-3,kubernetes.io/os=linux worker-4 Ready 2d18h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-client=enabled,elastic-controller=enabled,elastic-data=enabled,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-4,kubernetes.io/os=linux worker-5 Ready 2d16h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-5,kubernetes.io/os=linux ``` When I check logs: ``` us: <_Rendezvous of RPC that terminated with: status = StatusCode.UNKNOWN details = "release mon-kibana failed: timed out waiting for the condition" debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" > 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller Traceback (most recent call last): 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 473, in install_release 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller metadata=self.metadata) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, None) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller status = StatusCode.UNKNOWN 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller details = "release mon-kibana failed: timed out waiting for the condition" 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller > 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller 2020-06-07 14:07:21.199 7963 DEBUG armada.handlers.tiller [-] [chart=kibana]: Helm getting release status for release=mon-kibana, version=0 get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:539 2020-06-07 14:07:21.402 7963 DEBUG armada.handlers.tiller [-] [chart=kibana]: GetReleaseStatus= name: "mon-kibana" info { status { code: FAILED } first_deployed { seconds: 1591538240 nanos: 977775758 } last_deployed { seconds: 1591538240 nanos: 977775758 } Description: "Release \"mon-kibana\" failed: timed out waiting for the condition" } namespace: "monitor" get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:547 
2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada [-] Chart deploy [kibana] failed: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: mon-kibana - Tiller Message: b'Release "mon-kibana" failed: timed out waiting for the condition' 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada Traceback (most recent call last): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 473, in install_release 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada metadata=self.metadata) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada return _end_unary_response_blocking(state, call, False, None) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada raise _Rendezvous(state, None, None, deadline) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada status = StatusCode.UNKNOWN 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada details = "release mon-kibana failed: timed out waiting for the condition" 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada > 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada During handling of the above exception, another exception occurred: 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada Traceback (most recent call last): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 225, in handle_result 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada result = get_result() 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 236, in 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada if (handle_result(chart, lambda: deploy_chart(chart))): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 214, in deploy_chart 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada chart, cg_test_all_charts, prefix, known_releases) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 239, in execute 2020-06-07 14:07[402248.574350] serial8250: too much work for irq4 :21.404 7963 ERROR armada.handlers.armada timeout=timer) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 486, in install_release 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada raise ex.ReleaseException(release, status, 'Install') 2020-06-07 14:07:21.404 7963 ERROR 
armada.handlers.armada armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: mon-kibana - Tiller Message: b'Release "mon-kibana" failed: timed out waiting for the condition' 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.406 7963 ERROR armada.handlers.armada [-] Chart deploy(s) failed: ['kibana'] 2020-06-07 14:07:21.478 7963 INFO armada.handlers.lock [-] Releasing lock 2020-06-07 14:07:21.486 7963 ERROR armada.cli [-] Caught internal exception: armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kibana'] 2020-06-07 14:07:21.486 7963 ERROR armada.cli Traceback (most recent call last): 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke 2020-06-07 14:07:21.486 7963 ERROR armada.cli self.invoke() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 213, in invoke 2020-06-07 14:07:21.486 7963 ERROR armada.cli resp = self.handle(documents, tiller) 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 81, in func_wrapper 2020-06-07 14:07:21.486 7963 ERROR armada.cli return future.result() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result 2020-06-07 14:07:21.486 7963 ERROR armada.cli return self.__get_result() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result 2020-06-07 14:07:21.486 7963 ERROR armada.cli raise self._exception 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run 2020-06-07 14:07:21.486 7963 ERROR armada.cli result = self.fn(*self.args, **self.kwargs) 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 256, in handle 2020-06-07 14:07:21.486 7963 ERROR armada.cli return armada.sync() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 252, in sync 2020-06-07 14:07:21.486 7963 ERROR armada.cli raise armada_exceptions.ChartDeployException(failures) 2020-06-07 14:07:21.486 7963 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kibana'] 2020-06-07 14:07:21.486 7963 ERROR armada.cli ``` What mean the error above? I just want to know, is stx-monitor stable or still experimental? Because I could not found documentation about it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Mon Jun 8 08:53:52 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 8 Jun 2020 08:53:52 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi Frank, It is not easy to figure out whether/how/when OpenStack-helm-info upstream introduce this issue and then fix it. I also could not find any fix in LP[1], which just mentioned that this intermittent issue not hit us after some changes in related field. Anyhow, below 2 patches should fix potential bug and I could not see the same error log again in our ussuri upgrade EB. 
https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death Since we have passed fully test, we'd better push to merge ussuri upgrade/openstack-helm rebasing patches soon. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月5日 22:32 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020. If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0. Frank [1] https://storyboard.openstack.org/#!/story/2004712 -----Original Message----- From: Liu, ZhipengS Sent: Friday, June 05, 2020 2:36 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. My test step is: 1) ssh to standby controller and sudo reboot -f for it. 2) sudo reboot -f for activated controller All pods can resume after a while. However, I could reproduce this issue with DB 20200516T080009Z. From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes below 2 patches which fixed this stability issue. https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death [1] https://bugs.launchpad.net/starlingx/+bug/1881899 [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:35 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. 
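[Editorial aside] For anyone wanting to confirm on a running system that the two openstack-helm-infra fixes really address the recovery problem, a quick Galera health check can be run once both controllers come back. This is only a sketch, not part of any existing test suite: the openstack namespace, the application=mariadb label, the mariadb-server-0 pod name and the /etc/mysql/admin_user.cnf credentials path follow common stx-openstack / openstack-helm mariadb conventions and should be adjusted to the deployment under test.

```
# Watch the mariadb pods return after both controllers are reset.
kubectl -n openstack get pods -l application=mariadb -o wide -w

# Once the pods report Ready, confirm the Galera cluster re-formed and is usable:
# wsrep_cluster_size should match the number of mariadb server pods,
# wsrep_cluster_status should be "Primary", and wsrep_ready should be "ON".
kubectl -n openstack exec mariadb-server-0 -- \
  mysql --defaults-file=/etc/mysql/admin_user.cnf \
  -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_cluster_status','wsrep_ready');"
```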
One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. 
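[Editorial aside] Since several of these failures end up in CrashLoopBackOff, capturing the same triage data each time before the pods are cleaned up would make the planned launchpad easier to analyze. A minimal sketch, assuming the stx-openstack pods run in the openstack namespace and using mariadb-server-0 only as an example pod name:

```
# Pods that have not recovered, plus the most recent cluster events.
kubectl -n openstack get pods -o wide | grep -v Running
kubectl -n openstack get events --sort-by=.metadata.creationTimestamp | tail -n 50

# For each stuck pod: probe/mount failures, then the log of the crashed attempt.
kubectl -n openstack describe pod mariadb-server-0
kubectl -n openstack logs mariadb-server-0 --previous
```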
In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
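[Editorial aside] To make the "around 10 min" recovery figure easier to compare between the master build and the Ussuri engineering build, the wait can be timed with a small loop. This is only a rough sketch (the openstack namespace and the Running/Completed heuristic are assumptions; a stricter variant would also check the READY column), not an existing test script:

```
# Time how long the openstack pods take to settle after a forced reboot.
start=$(date +%s)
while kubectl -n openstack get pods --no-headers | grep -vE 'Running|Completed' | grep -q .; do
  sleep 10
done
echo "pods settled after $(( $(date +%s) - start )) seconds"
```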
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Mon Jun 8 09:03:01 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 8 Jun 2020 09:03:01 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! In-Reply-To: References: <1066280169.1577.1591076238973.JavaMail.javamailuser@localhost> <2072851777.1587.1591151446912.JavaMail.javamailuser@localhost> <149e9a96-fb7c-cf34-0a76-230495d7d8da@linux.intel.com> <50e99851-bd06-af9b-e176-d6ef6e704df8@windriver.com> Message-ID: Hi Scott, After discussed with Chengde, In order not to introduce these packages version conflict in local mirror, we'd better revert the commit 44a8a1d798dc98d4f6ffcd200237c94585b31c40 with https://review.opendev.org/#/c/734035/ Please help to update cengn build script with below 2 additional repos. build-stx-base.sh --repo local-stx-build,... \ --repo stx-distro,... \ --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ Thanks! Zhipeng From: Liu, ZhipengS Sent: 2020年6月6日 9:30 To: 'Scott Little' ; 'starlingx-discuss at lists.starlingx.io' ; 'YuChengDe' Subject: RE: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! Hi Scott, We have updated the patch below as you see and fixed your comment as well, thanks! https://review.opendev.org/#/c/733426/ It has been verified by Chengde! Many thanks!! After this patch get merged, could you do me a favor to cherry pick below patches to check if OpenStack images build can be triggered successfully by cengn script? (glance, cinder, nova, horizon) https://review.opendev.org/#/c/712880/ Modify build-tools and stable-wheels for Ussuri upgrading https://review.opendev.org/#/c/712862/ Update openstack docker images for stable/ussuri You might need add below repo in your build script. 
--repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ Thanks a lot! Zhipeng From: Liu, ZhipengS Sent: 2020年6月4日 22:36 To: Scott Little >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! Hi Scott, For our OpenStack upgrade case, we may have one more option that is not adding this ceph 13.2.10 repo to local build repo folder. Instead, we add this ceph repo as a parameter when we run build-stx-base.sh. Then this repo only used by OpenStack build. We will verify it tomorrow. Thanks! Zhipeng From: Scott Little > Sent: 2020年6月4日 22:19 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! I see https://review.opendev.org/#/c/733426/9 has been posted With this update, layered builds should pass, and would look like this ... * Flock and iso builds will use 13.2.2. * All container builds uses 13.2.10. * Do we want 13.2.10 in ALL containers? * Any ceph dependent rpms from distro/flock builds that make it into a container (if any), will have been compiled against 13.2.2, but will run against 13.2.10. I'm more comfortable with a increment to the patch level than a decrement. I think we can live with this until we can move to 13.2.10 universally. Monolithic will continue to build, but will remain confused ... All lst files, including container layer lsts, are downloaded before any package is built. Most if not all packages that depend on ceph will build against 13.2.10 as mock/yum does not understand the 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of which is unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see shipped version LOWER than the complied against version. On 2020-06-03 6:08 p.m., Saul Wold wrote: On 6/3/20 2:01 PM, Scott Little wrote: No I don't think that would work. We can't have two versions of the same package competing for dominance within the mock build environments. i.e. on time pkg X builds against 13.2.2, the next time against 13.2.10. The outcome dependent on the vagaries of job scheduling, build speeds, and any other number of factors. If you compile against 13.2.10, will you run ok vs 13.2.2. I wouldn't want to bet on it. The build layering solution might be to throw it in it's own layer. Until we are 100% committed to build layering, we need to converge on ONE version of ceph. Ok, so one option is to move to Ceph 13.2.10 or drop the existing package list update that brings in the python3 and related Ceph packages. Do we need to at least revert that commit in-order to get the build working again? We might need to spend a few minutes to hash this out tomorrow morning at the PTG. Sau! Scott On 2020-06-03 10:52 a.m., Saul Wold wrote: On 6/3/20 1:47 AM, Liu, ZhipengS wrote: Hi Scott, For question #1, When we built openstack ussuri image which is python3 only. It needs python3-rbd and related dependency, so we add librados2-13.2.10 and related packages. For local built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it is for python2. Shouldn’t we let the build choose local build first? Following up on this we need to be careful about which we choose, as I said in the other email is this a one-off issue or something that we see more of. So maybe an audit tool would help. 
Another option is moving these packages to the container layer, adding them to rpms_centos.lst in config/centos/flock/? I understand this option better after chatting with Zhipeng. I think this might be the best option: adding the updated Ceph / RBD related packages to the container list, which will be used for the Ussuri container builds but not by the platform OS. This would mean that the containers would have Ceph 13.2.10 related packages and the platform OS would be 13.2.2. Would that cause problems or stability issues? Sau! Thanks! Zhipeng *From:*Scott Little *Sent:* 2020年6月3日15:57 *To:* starlingx-discuss at lists.starlingx.io *Subject:* Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing! This was an interesting one. We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time. A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer. Now build-iso prefers locally built packages over downloaded ones, even if the downloaded one is of a higher version. Now that policy is open for debate, but that is what it does. Monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects librados2-13.2.2-0.el7.tis.25.x86_64.rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso. The flock layer build downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build; it doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower-versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm. The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm. A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build. Open questions: 1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2? If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10, or can we drop librados2 from the set of packages we have patches against? 2) For build-iso... should we prefer locally built packages even though there is a higher package named in an lst? If yes, then the layered build needs to apply the local-first policy across layers. Alternatively, perhaps drop the local-first policy, but add an audit tool to detect when a locally built package is being masked in this way.
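[Editorial aside] The audit-tool idea could start out quite small: compare each locally built RPM against the newest version of the same package name offered by the lst-driven mirror, and flag any case where the mirror would win. The sketch below is illustrative only; the RPM directory under $MY_WORKSPACE, the repoquery configuration, and the use of rpmdev-vercmp are assumptions about the build environment rather than existing build-tools behaviour.

```
#!/bin/bash
# Flag locally built packages that a higher-versioned mirror RPM would mask.
# Assumes locally built RPMs sit under $MY_WORKSPACE/std/rpmbuild/RPMS and
# that repoquery is pointed at the same repos the lst files populate.
for rpm_file in "$MY_WORKSPACE"/std/rpmbuild/RPMS/*.rpm; do
    name=$(rpm -qp --qf '%{NAME}' "$rpm_file")
    local_vr=$(rpm -qp --qf '%{VERSION}-%{RELEASE}' "$rpm_file")
    mirror_vr=$(repoquery --qf '%{VERSION}-%{RELEASE}' "$name" 2>/dev/null | sort -V | tail -n 1)
    [ -z "$mirror_vr" ] && continue
    rpmdev-vercmp "$mirror_vr" "$local_vr" >/dev/null 2>&1
    # rpmdev-vercmp exits 11 when its first argument is the newer version.
    if [ $? -eq 11 ]; then
        echo "MASKED: $name locally built $local_vr, mirror offers $mirror_vr"
    fi
done
```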
Scott On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote: Project: STX_build_layer_flock_master_master Build #: 132 Status: Still Failing Timestamp: 20200603T020359Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Jun 8 14:02:49 2020 From: scott.little at windriver.com (Scott Little) Date: Mon, 8 Jun 2020 10:02:49 -0400 Subject: [Starlingx-discuss] [Build] A new way to test your package's dependencies Message-ID: We now have a new command to test a package for its build dependencies:    build-pkgs --dep-test It should be used whenever you upversion a package, or make significant changes to its build scripts (spec files, make files, auto-config ...). Note: This should only be used following a full build-pkgs, i.e. you need to be sure that any dependencies that we also build are available. One might think that if your package passes a full build (build-pkgs), you are safe, but this is NOT the case. When doing a full build, we don't wipe the build environment clean between packages.
This means that the environment might (or might not) have a tool or library present that your package needs, but fails to list as a BuildRequires in its spec file. It will build successfully one time, but might not build the next. It all depends on what packages were scheduled to build in the same environment before the package of interest. The --dep-test option rebuilds just one package in a clean environment, providing an effective test of the BuildRequires for your package. -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Mon Jun 8 15:46:22 2020 From: helena at openstack.org (helena at openstack.org) Date: Mon, 8 Jun 2020 11:46:22 -0400 (EDT) Subject: [Starlingx-discuss] StarlingX Glossary Message-ID: <1591631182.557525360@apps.rackspace.com> Greetings StarlingX Community! We are on a mission to create a glossary of StarlingX related terms and want your help! As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Similarly, having a glossary of terms has proven to be a good SEO tactic to gain more web traffic; by creating this glossary, we are hoping to have greater visibility to potential contributors, users, and supporting organizations. This is where you come in! We need your help to define the terms that we can use to educate future contributors. Below is an etherpad link. We ask that you add, edit, review, and collaborate on this etherpad to help us make the StarlingX community more accessible and understandable. If you think of more terms to add to the list, please do! As always, feel free to reach out with any questions.
Cheers, Helena Spease StarlingX: https://etherpad.opendev.org/p/StarlingX_Glossary -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Jun 8 16:49:44 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 8 Jun 2020 16:49:44 +0000 Subject: [Starlingx-discuss] StarlingX R4.0 In-Reply-To: <32416eee-3bf5-56eb-c66b-13f103c67769@kunet.com> References: <32416eee-3bf5-56eb-c66b-13f103c67769@kunet.com> Message-ID: Hello Ammar, Welcome to the StarlingX project! This link should be your starting point: https://www.starlingx.io/ From there, you can access the various community communication channels: https://www.starlingx.io/community/ as well as links to the software https://www.starlingx.io/software/ The latest StarlingX official release is stx.3.0: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0/ The community is working actively on the next release: stx.4.0 which will be released in mid-July. I recommend that you use the starlingx mailing list (cc'd) for any further inquiries. Best Regards, Ghada -----Original Message----- From: Ammar T. Al-Sayegh [mailto:ammar at kunet.com] Sent: Monday, June 08, 2020 8:44 AM To: Khalil, Ghada Subject: StarlingX R4.0 Dear Ghada, I am planning to adopt StarlingX for building an edge cloud for my business. Would you be able to kindly give me access to the latest release of the system? Thank you very much. Dr. Ammar T. Al-Sayegh General Manager, KUNet From scott.little at windriver.com Mon Jun 8 19:06:31 2020 From: scott.little at windriver.com (Scott Little) Date: Mon, 8 Jun 2020 15:06:31 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 137 - Still Failing! In-Reply-To: <2070081043.1625.1591572494885.JavaMail.javamailuser@localhost> References: <1645969591.1616.1591495112325.JavaMail.javamailuser@localhost> <2070081043.1625.1591572494885.JavaMail.javamailuser@localhost> Message-ID: The offending update has been backed out. Flock layer build #138 was a success. Scott On 2020-06-07 7:28 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_layer_flock_master_master > Build #: 137 > Status: Still Failing > Timestamp: 20200607T230408Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200607T230408Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Mon Jun 8 19:17:32 2020 From: allison at openstack.org (Allison Price) Date: Mon, 8 Jun 2020 14:17:32 -0500 Subject: [Starlingx-discuss] OSF Community Meeting - June 25 & 26 Message-ID: <50093F4B-FD92-4CA1-A03F-55DE0A8F2C3D@openstack.org> Hi everyone, On June 25 (1300 UTC) and June 26 (0200 UTC) , we will be holding the quarterly OSF community [1] that will cover project updates from all OSF-supported projects and events. The StarlingX community is encouraged to prepare a slide and present a 3-5 minute update on the project and community’s progress. The update should cover updates that have occurred since the last community meeting on April 2. If you would like to volunteer to present the StarlingX update for one meeting (or both!) 
please sign up here [1]. We are aiming to finalize the content by Friday, June 19. If you missed the Q1 community meeting, you can see how the upcoming meeting will be structured in this recording [2] and this slide deck [3]. If you have any questions, please let me know. Thanks! Allison [1] https://etherpad.opendev.org/p/OSF_Community_Meeting_Q2 [2] https://zoom.us/rec/share/7vVXdIvopzxIYbPztF7SVpAKXYnbX6a82iMaqfZfmEl1b0Fqb6j3Zh47qPSV_ar2 [3] https://docs.google.com/presentation/d/1l05skj_BCfF8fgYWu4n0b1rQmbNhHp8sMeYcb-v-rdA/edit#slide=id.g82b6d187d5_0_525 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Jun 9 02:21:27 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 9 Jun 2020 02:21:27 +0000 Subject: [Starlingx-discuss] No StarlingX Containerization meeting --> offline update instead Message-ID: There will not be a meeting on Tuesday June 9. Instead status of the stx.4.0 containerization features has been updated on the etherpad [1]. If anyone else has any updates or topics for discussion please add an update to the etherpad. Frank Containers PL [1] https://etherpad.opendev.org/p/stx-containerization -------------- next part -------------- An HTML attachment was scrubbed... URL: From maryx.camp at intel.com Tue Jun 9 02:24:39 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Tue, 9 Jun 2020 02:24:39 +0000 Subject: [Starlingx-discuss] StarlingX Glossary In-Reply-To: References: <1591631182.557525360@apps.rackspace.com> Message-ID: Hi Helena, I am the Project Lead for StarlingX docs and I am happy to help with updates to the Terms list that Brucej linked below. Please ping me if you have questions. thanks, Mary Camp PTIGlobal Technical Writer | maryx.camp at intel.com From: Jones, Bruce E Sent: Monday, June 8, 2020 12:00 PM To: helena at openstack.org; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Glossary Hi Helena. I think we already have a good start on this in the StarlingX documentation [1]. I suggest we focus on improving that glossary instead of starting a new one. Brucej [1] https://docs.starlingx.io/introduction/terms.html From: helena at openstack.org > Sent: Monday, June 8, 2020 8:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Glossary Greetings StarlingX Community! We are on a mission to create a glossary of StarlingX related terms and want your help! As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Similarly, having a glossary of terms has proven to be a good SEO tactic to gain more web traffic; by creating this glossary, we are hoping to have greater visibility to potential contributors, users, and supporting organizations. This is where you come in! We need your help to define the terms that we can use to educate future contributors Below is an etherpad link. We ask that you add, edit, review, and collaborate on this etherpad to help us make the StarlingX community more accessible and understandable. If you think of more terms to add to the list, please do! As always, feel free to reach out with any questions. Cheers, Helena Spease StarlingX: https://etherpad.opendev.org/p/StarlingX_Glossary -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin.sun at intel.com Tue Jun 9 08:38:34 2020 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 9 Jun 2020 08:38:34 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 6/10/2020 Message-ID: Hi All: Agenda for 6/10 meeting: - PTG update: https://etherpad.opendev.org/p/stx-virtual-PTG-June - ceph containerization: - centos8 and python3 - bugs: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other - open: If have any other topic, feel free to add to https://etherpad.openstack.org/p/stx-distro-other Thanks. BR Austin Sun. From zhipengs.liu at intel.com Tue Jun 9 08:39:30 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 9 Jun 2020 08:39:30 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <3e069a86-881b-2d3e-b743-95833b3040cb@linux.intel.com> Message-ID: Hi all, So far, all block issues and concerns have been addressed. Since we have passed all sanity test, and Ussuri OpenStack has been officially released last month, there should be no more reason to block these patches merge. Next step: Let's push to get ussuri upgrade/openstack-helm rebasing patches merged. We need great help from core guys! https://review.opendev.org/#/q/topic:for_ussuri+(status:open) # Below 6 patches are for OpenStack-helm/infra rebase. (we set first patch with workflow-1 and add depends-on for other patches as we need to merge them together.) Upgrade openstack-helm-infra zhipeng liu starlingx/openstack-armada-app workflow-1 Add mariadb database config override to support ipv6 zhipeng liu starlingx/openstack-armada-app Fix render error in cinder during openstack-helm rebase zhipeng liu starlingx/openstack-armada-app Update download list for openstack-helm upgrade zhipeng liu starlingx/openstack-armada-app Update manifest.yaml file for openstack-helm upgrade. zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm zhipeng liu starlingx/openstack-armada-app # Below 3 patches is for OpenStack upgrade. Update manifest.yaml file for ussuri openstack YU CHENGDE starlingx/openstack-armada-app Modify build-tools and stable-wheels for Ussuri upgrading YU CHENGDE starlingx/root Upgrade openstack docker images for stable/ussuri YU CHENGDE starlingx/upstream After removing required python3 dependent packages from local, we can build out base image and OpenStack service images successfully with below command. =============================================================================== @Scott, please help to update cengn build script with below 2 additional repos and help to trigger image build build-stx-base.sh --repo local-stx-build,... \ --repo stx-distro,... \ --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ Thanks a lot! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月8日 16:54 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, It is not easy to figure out whether/how/when OpenStack-helm-info upstream introduce this issue and then fix it. I also could not find any fix in LP[1], which just mentioned that this intermittent issue not hit us after some changes in related field. Anyhow, below 2 patches should fix potential bug and I could not see the same error log again in our ussuri upgrade EB. 
https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death Since we have passed fully test, we'd better push to merge ussuri upgrade/openstack-helm rebasing patches soon. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月5日 22:32 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020. If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0. Frank [1] https://storyboard.openstack.org/#!/story/2004712 -----Original Message----- From: Liu, ZhipengS Sent: Friday, June 05, 2020 2:36 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Friesen, Chris Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. My test step is: 1) ssh to standby controller and sudo reboot -f for it. 2) sudo reboot -f for activated controller All pods can resume after a while. However, I could reproduce this issue with DB 20200516T080009Z. From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes below 2 patches which fixed this stability issue. https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death [1] https://bugs.launchpad.net/starlingx/+bug/1881899 [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:35 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: This is not a new requirement. Users expect the software to recover when resets occur. As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. 
One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, June 03, 2020 10:28 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Frank, Have we pass this case before? Is it a new requirement? Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 22:12 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 10:38 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! We used a build from May 28. As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 10:04 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Thanks for your quick update! Which build are you using to test this case? Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年6月3日 8:55 To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. Frank -----Original Message----- From: Miller, Frank Sent: Tuesday, June 02, 2020 12:25 PM To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. 
In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. Frank -----Original Message----- From: Liu, ZhipengS Sent: Tuesday, June 02, 2020 11:47 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. We should fix this regression ASAP! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月2日 16:48 To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank and all, Update for issue 2. I raised a new LP to track it. https://bugs.launchpad.net/starlingx/+bug/1881722 Below is the time statistics. It seems reasonable. No obvious issue found. 1) 3~4min for host restart and get ready. 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? For LP https://bugs.launchpad.net/starlingx/+bug/1881454 Unable to unlock controller after swact and lock w/ openstack applied And https://bugs.launchpad.net/starlingx/+bug/1881711 system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年6月1日 16:20 To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, I also tested the issue 2 with latest daily build on duplex setup. The conclusion is that the issue is there all the time. This issue might not be fixed soon, but should not block OpenStack upgrade, right? For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. https://review.opendev.org/#/q/topic:for_ussuri+(status:open) Your review and comments are welcome! As for issue 2, some detail info FYI. It also needs to wait for around 10 min before all pods are ready again after reboot for master build. It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) openvswitch-db-8fxkw Related key logs below. 
Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? Your comment is appreciated! Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月29日 9:42 To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Glad to see your quick reply!! For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. [1] https://bugs.launchpad.net/starlingx/+bug/1855474 Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月29日 1:07 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Thanks Zhipeng. Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? Frank -----Original Message----- From: Liu, ZhipengS Sent: Thursday, May 28, 2020 5:06 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, Nicolae already added test case description. Thanks Nicolae! I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. https://review.opendev.org/#/c/731461/ https://review.opendev.org/#/c/731470/ Thanks! 
Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月27日 22:43 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? For the controller reset testcases I'd like to see the test result for the following: Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: - Lock/unlock of standby controller - reset (ie: reboot -f) of the standby controller - reset (ie: reboot -f) of the active controller - reapply of stx-openstack after the above scenarios Frank -----Original Message----- From: Liu, ZhipengS Sent: Wednesday, May 27, 2020 9:15 AM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi Frank, We have done below tests. 1) Sanity tests by Nicolae. AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] 2) NFV scenario test by me on duplex/multi standard virtual setup duplex bare metal setup ===== Setup ================================================================================================================================= 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] 2020-05-14 02:30:05.786 Create network internal .................................... [OKAY] 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] 2020-05-14 02:30:06.772 Create subnet internal ..................................... 
[OKAY] 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= ===== Test Iteration 0 (single-execution) =================================================================================================== 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) Total-Tests: 16 Execution-Time: 0:16:11.676 3) Another 2 test a) Using IPv6 It can pass with workaround now. I need one more fix for it. In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below config_override: | [mysqld] bind_address=:: However, it did not work now. 
From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" I tried many methods, but could not remove the first line in 20-override.cnf mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf |- [mysqld] bind_address=:: I can only add it in manifest.yaml as a static override like below. values: conf: database: config_override: | [mysqld] bind_address=:: b) Reset of controllers and check status of OpenStack while a controller is rebooting. I have tested it and pass on simplex. For duplex, I have a setup issue in my side. @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! Zhipeng -----Original Message----- From: Miller, Frank Sent: 2020年5月26日 21:13 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Zhipeng: Can you publish the list of tests that have been run for openstack? Also has openstack been tested for the following scenarios: 1) Using IPv6 2) Reset of controllers and check status of openstack while a controller is rebooting? Frank -----Original Message----- From: Liu, ZhipengS Sent: Monday, May 25, 2020 3:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, We have passed all sanity test on all setup. Thanks Nicolae!! We also built out OpenStack service images from layered build environment. Please help to review and push below patches to be merged, thanks! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) BRs Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月14日 16:49 To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Hi all, Call for patch review again! https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS Sent: 2020年5月9日 8:38 To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Agree! -----Original Message----- From: Saul Wold Sent: 2020年5月9日 0:29 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. Full Stop! Sau! On 5/8/20 9:05 AM, Miller, Frank wrote: > Until we can get sanity passing for several days in a row I strongly > suggest we do not allow any further changes into the load related to > OpenStack.  Folks can continue with reviews but let’s hold off > allowing merges related to a new OpenStack version. > > Frank > > *From:*Liu, ZhipengS > *Sent:* Friday, May 08, 2020 11:59 AM > *To:* starlingx-discuss > *Cc:* YU CHENGDE ; Penney, Don > > *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call > for patch review!! > > Hi all, > > Please help to review OpenStack Ussuri upgrade patches. > > Our target is to get all below patches merged by end of next week. > > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status > :merged) > > During OpenStack upgrade for StarlingX, we have to move python2.7 to > python3.6 for OpenStack services as ussuri release only support python3. 
> > We also rebased openstack-helm/helm-infra to latest version. > > Engineering build test status. > > 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. > 2. nfv_scenario_tests PASS on simplex bare metal setup. > 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. > > Thanks! > > Zhipeng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Dan.Voiculeasa at windriver.com Tue Jun 9 09:54:22 2020 From: Dan.Voiculeasa at windriver.com (Voiculeasa, Dan) Date: Tue, 9 Jun 2020 09:54:22 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , Message-ID: Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat ‘/tmp/hieradata/192.188.204.3.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory\ncp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory\ncp: cannot stat ‘>’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/tmp/hieradata/192.188.204.3.yaml’: No such file or directory", "cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory", "cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory", "cp: cannot stat ‘>’: No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Tue Jun 9 13:04:38 2020 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 9 Jun 2020 06:04:38 -0700 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: Message-ID: Frank, Scott, Davelet: Are there cycles available on Cengn (and people resources) to do a Cengn build with the Ussuri patch set applied? I know this is different than a branch build. I think we have done this kind of thing in the past. This might help to make sure we don't have any more Cengn build issues and could give the Test team a sanity spin with a Ussuri/Cengn build. Note there is a comment for Scott/Davelet at the bottom of Zhipeng's email. Thanks Sau! On 6/9/20 1:39 AM, Liu, ZhipengS wrote: > Hi all, > > So far, all block issues and concerns have been addressed. > Since we have passed all sanity test, and Ussuri OpenStack has been officially released last month, > there should be no more reason to block these patches merge. > > Next step: > Let's push to get ussuri upgrade/openstack-helm rebasing patches merged. We need great help from core guys! > https://review.opendev.org/#/q/topic:for_ussuri+(status:open) > > # Below 6 patches are for OpenStack-helm/infra rebase. (we set first patch with workflow-1 and add depends-on for other patches as we need to merge them together.) 
> Upgrade openstack-helm-infra zhipeng liu starlingx/openstack-armada-app workflow-1 > Add mariadb database config override to support ipv6 zhipeng liu starlingx/openstack-armada-app > Fix render error in cinder during openstack-helm rebase zhipeng liu starlingx/openstack-armada-app > Update download list for openstack-helm upgrade zhipeng liu starlingx/openstack-armada-app > Update manifest.yaml file for openstack-helm upgrade. zhipeng liu starlingx/openstack-armada-app > Upgrade openstack-helm zhipeng liu starlingx/openstack-armada-app > > # Below 3 patches is for OpenStack upgrade. > Update manifest.yaml file for ussuri openstack YU CHENGDE starlingx/openstack-armada-app > Modify build-tools and stable-wheels for Ussuri upgrading YU CHENGDE starlingx/root > Upgrade openstack docker images for stable/ussuri YU CHENGDE starlingx/upstream > > > After removing required python3 dependent packages from local, we can build out base image and OpenStack service images successfully with below command. > =============================================================================== > @Scott, please help to update cengn build script with below 2 additional repos and help to trigger image build > build-stx-base.sh > --repo local-stx-build,... \ > --repo stx-distro,... \ > --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ > --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > Thanks a lot! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月8日 16:54 > To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Friesen, Chris > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > It is not easy to figure out whether/how/when OpenStack-helm-info upstream introduce this issue and then fix it. > I also could not find any fix in LP[1], which just mentioned that this intermittent issue not hit us after some changes in related field. > > Anyhow, below 2 patches should fix potential bug and I could not see the same error log again in our ussuri upgrade EB. > https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death > > Since we have passed fully test, we'd better push to merge ussuri upgrade/openstack-helm rebasing patches soon. > https://review.opendev.org/#/q/topic:for_ussuri+(status:open) > > [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ > > Thanks! > Zhipeng > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月5日 22:32 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Friesen, Chris > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020. 
> > If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0. > > Frank > [1] https://storyboard.openstack.org/#!/story/2004712 > > -----Original Message----- > From: Liu, ZhipengS > Sent: Friday, June 05, 2020 2:36 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Friesen, Chris > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > As for OpenStack not recovering after both controllers are reset [1] I could not reproduce this issue with my Ussuri upgrade EB. > My test step is: > 1) ssh to standby controller and sudo reboot -f for it. > 2) sudo reboot -f for activated controller All pods can resume after a while. > > However, I could reproduce this issue with DB 20200516T080009Z. > From error logs, it is an old issue analyzed by Chris Friesen in [2] early last year. > > In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. > It includes below 2 patches which fixed this stability issue. > https://review.opendev.org/#/c/704034/ Prevent splitbrain during full Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid state management thread death > > [1] https://bugs.launchpad.net/starlingx/+bug/1881899 > [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月3日 22:35 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > This is not a new requirement. Users expect the software to recover when resets occur. > > As I had mentioned at the PTG yesterday I know personally that this test passed in stx3.0 before the upversion to train. Someone else who performs testing can look to determine when this test was done as part of feature testing after train was delivered as it should have been tested as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow will be how to improve our test coverage and automation so this type of issue can be found immediately as new code is being delivered. > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Wednesday, June 03, 2020 10:28 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Frank, > > Have we pass this case before? Is it a new requirement? > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月3日 22:12 > To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899 > > Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation. 
> > Frank > > -----Original Message----- > From: Miller, Frank > Sent: Tuesday, June 02, 2020 10:38 PM > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > We used a build from May 28. > > As for the decoupling issue these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied you won’t see the CLI command fail. It only fails when you try a helm-override-show when the app is in uploaded state. In any case this will be fixed shortly. > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Tuesday, June 02, 2020 10:04 PM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > Thanks for your quick update! > Which build are you using to test this case? > Since decoupling commits introduced several regressions (at least 2), not propose to do this kind of stability test with latest build. > BTW, do we have plan to revert them considering this stability risk? Our Ussuri upgrade patches is waiting for it☹ > > Furthermore, we have not seen this test case that force reboot both controllers at the same time. Is it a new requirement? If not , have we pass this case before, which build? > I'd like to help on it with the pass build for comparative analysis. From my point , mariadb might not work if we reboot both controllers. > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年6月3日 8:55 > To: Miller, Frank ; Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue. > > Frank > > -----Original Message----- > From: Miller, Frank > Sent: Tuesday, June 02, 2020 12:25 PM > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues. > > In our debug of openstack the past few days we are seeing the app fail completely. After investigation this issue is a Day 1 containerd issue. This is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353 > > The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that. > > But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this. > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Tuesday, June 02, 2020 11:47 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
> > For LP https://bugs.launchpad.net/starlingx/+bug/1881454 > Unable to unlock controller after swact and lock w/ openstack applied I also tested with daily build 20200516T080009Z. However, it could not be reproduced. > We should fix this regression ASAP! > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月2日 16:48 > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Church, Robert > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank and all, > > Update for issue 2. > I raised a new LP to track it. > https://bugs.launchpad.net/starlingx/+bug/1881722 > Below is the time statistics. It seems reasonable. No obvious issue found. > 1) 3~4min for host restart and get ready. > 2) 2~3min for mariadb terminating, initialization, get ready. (then configmap sync is ready) > 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a little, as it can retry quickly to connect ovs-vsctl: unix:/var/run/openvswitch/db.sock) > 4) 1min for other pods ready, like neutron-ovs-agent which depends on ovs-db. ) Any comment? > > For LP https://bugs.launchpad.net/starlingx/+bug/1881454 > Unable to unlock controller after swact and lock w/ openstack applied > And https://bugs.launchpad.net/starlingx/+bug/1881711 > system helm-override-show stx-openstack mariadb openstack crash It seems related to openstack plugin decouple related patches. Should be a regression. > Please see our update in this 2 LPs for detail info. @Bob, could you pls help further check it and your patches, thanks! > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年6月1日 16:20 > To: 'Miller, Frank' ; 'starlingx-discuss at lists.starlingx.io' ; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > I also tested the issue 2 with latest daily build on duplex setup. > The conclusion is that the issue is there all the time. > This issue might not be fixed soon, but should not block OpenStack upgrade, right? > > For 9 OpenStack patches below, I have removed all workflow-1, except the first patch and add depends-on all them. > https://review.opendev.org/#/q/topic:for_ussuri+(status:open) > Your review and comments are welcome! > > As for issue 2, some detail info FYI. > It also needs to wait for around 10 min before all pods are ready again after reboot for master build. > It stuck on below 2 pods for 10 min. The same as the one I saw with my OpenStack upgrade engineering build. > neutron-ovs-agent-controller-0-937646f6-xxznw(depends openvswitch-db) > openvswitch-db-8fxkw > Related key logs below. 
> Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition > Warning FailedMount 2m19s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition > Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition > Warning FailedMount 105s kubelet, controller-1 MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition > Warning Unhealthy 30s kubelet, controller-1 Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) > Warning Unhealthy 7s kubelet, controller-1 Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied) > > Is it the same stability issue as the one reported from your test team? I can only see this issue after force rebooting. What is our expected recovery time? > Your comment is appreciated! > > Thanks! > Zhipeng > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年5月29日 9:42 > To: 'Miller, Frank' ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > Glad to see your quick reply!! > For OpenStack upgrade task, we have finished all test and get patches ready for more than 2 weeks, but no any review comments and feedback from your side. What's the next step? > > For issue # 2, in community meeting notes, I saw that you had some stability issue from WR local test team. But so far, I do not see any LP for the detail info. You should ask them to do that! Right? > > According to your concern, I tried to reproduce it with my build (cherry pick OpenStack upgrade patches)yesterday, and the original issue [1] was not seen any more, mariadb got ready quickly, no regression. > > [1] https://bugs.launchpad.net/starlingx/+bug/1855474 > > Thanks! > Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年5月29日 1:07 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Thanks Zhipeng. > > Good to see progress on IPv6. > Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage? > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Thursday, May 28, 2020 5:06 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > Nicolae already added test case description. Thanks Nicolae! > > I also did below test on AIO-DX virtual setup, exactly according to your mentioned steps. > No issue found, but just need to wait for around 10 min before all pods are ready again after reboot. > > For ipv6 issue, I have submitted new patch for it since dynamic override for database config did not work. > https://review.opendev.org/#/c/731461/ > https://review.opendev.org/#/c/731470/ > > Thanks! 
> Zhipeng > > -----Original Message----- > From: Miller, Frank > Sent: 2020年5月27日 22:43 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do? > > For the controller reset testcases I'd like to see the test result for the following: > Is openstack usable during the following scenarios on AIO-DX and on Standard configurations: > - Lock/unlock of standby controller > - reset (ie: reboot -f) of the standby controller > - reset (ie: reboot -f) of the active controller > - reapply of stx-openstack after the above scenarios > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Wednesday, May 27, 2020 9:15 AM > To: Miller, Frank ; starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi Frank, > > We have done below tests. > 1) Sanity tests by Nicolae. > AIO - Simplex > Setup 04 TCs [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 49 TCs [PASS] > Sanity Platform 07 TCs [PASS] > > TOTAL: [ 61 TCs ] > > AIO - Duplex > Setup 04 TCs [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 52 TCs [PASS] > Sanity Platform 07 TCs [PASS] > > TOTAL: [ 64 TCs ] > > Standard - Local Storage (2+2) > Setup 04 TCs [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 52 TCs [PASS] > Sanity Platform 08 TCs [PASS] > > TOTAL: [ 65 TCs ] > > Standard External - Dedicated Storage (2+2+2) > Setup 04 TCs [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 52 TCs [PASS] > Sanity Platform 09 TCs [PASS] > > TOTAL: [ 66 TCs ] > > 2) NFV scenario test by me > on duplex/multi standard virtual setup > duplex bare metal setup > ===== Setup ================================================================================================================================= > 2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY] > 2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY] > 2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY] > 2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY] > 2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY] > 2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY] > 2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY] > 2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY] > 2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY] > 2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY] > 2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY] > 2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY] > 2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY] > 2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY] > 2020-05-14 02:30:05.786 Create network internal .................................... 
[OKAY] > 2020-05-14 02:30:06.158 Create network external .................................... [OKAY] > 2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY] > 2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY] > 2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY] > 2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY] > 2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY] > 2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY] > 2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY] > 2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY] ============================================================================================================================================= > ===== Test Iteration 0 (single-execution) =================================================================================================== > 2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870) > 2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866) > 2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792) > 2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937) > 2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748) > 2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704) > 2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220) > 2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068) > 2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306) > 2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179) > 2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884) > 2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637) > 2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812) > 2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777) > 2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748) > 2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980) > Total-Tests: 16 Execution-Time: 0:16:11.676 > > 3) Another 2 test > a) Using IPv6 > It can pass with workaround now. I need one more fix for it. 
> In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically override below > config_override: | > [mysqld] > bind_address=:: > However, it did not work now. From log, it shows error "OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'" > I tried many methods, but could not remove the first line in 20-override.cnf > mysql at mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf > |- > [mysqld] > bind_address=:: > I can only add it in manifest.yaml as a static override like below. > values: > conf: > database: > config_override: | > [mysqld] > bind_address=:: > > b) Reset of controllers and check status of OpenStack while a controller is rebooting. > I have tested it and pass on simplex. > For duplex, I have a setup issue in my side. > @Jascanu, Nicolae Could you help me on it for duplex test, if you have time today. Thanks! > > Zhipeng > > > > -----Original Message----- > From: Miller, Frank > Sent: 2020年5月26日 21:13 > To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Zhipeng: > > Can you publish the list of tests that have been run for openstack? > > Also has openstack been tested for the following scenarios: > 1) Using IPv6 > 2) Reset of controllers and check status of openstack while a controller is rebooting? > > Frank > > -----Original Message----- > From: Liu, ZhipengS > Sent: Monday, May 25, 2020 3:14 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi all, > > We have passed all sanity test on all setup. Thanks Nicolae!! > We also built out OpenStack service images from layered build environment. > > Please help to review and push below patches to be merged, thanks! > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) > > BRs > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年5月14日 16:49 > To: 'Saul Wold' ; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Hi all, > > Call for patch review again! > https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS > Sent: 2020年5月9日 8:38 > To: Saul Wold ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > Agree! > > -----Original Message----- > From: Saul Wold > Sent: 2020年5月9日 0:29 > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! > > I would strengthen that to no changes until we get Green Sanity other than what's required to make them Green. > > Full Stop! > > Sau! > > > On 5/8/20 9:05 AM, Miller, Frank wrote: >> Until we can get sanity passing for several days in a row I strongly >> suggest we do not allow any further changes into the load related to >> OpenStack.  Folks can continue with reviews but let’s hold off >> allowing merges related to a new OpenStack version. >> >> Frank >> >> *From:*Liu, ZhipengS >> *Sent:* Friday, May 08, 2020 11:59 AM >> *To:* starlingx-discuss >> *Cc:* YU CHENGDE ; Penney, Don >> >> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! 
>> >> Hi all, >> >> Please help to review OpenStack Ussuri upgrade patches. >> >> Our target is to get all below patches merged by end of next week. >> >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >> :merged) >> >> During OpenStack upgrade for StarlingX, we have to move python2.7 to >> python3.6 for OpenStack services as ussuri release only support python3. >> >> We also rebased openstack-helm/helm-infra to latest version. >> >> Engineering build test status. >> >> 1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >> 2. nfv_scenario_tests PASS on simplex bare metal setup. >> 3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. >> >> Thanks! >> >> Zhipeng >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From helena at openstack.org Tue Jun 9 16:53:50 2020 From: helena at openstack.org (helena at openstack.org) Date: Tue, 9 Jun 2020 12:53:50 -0400 (EDT) Subject: [Starlingx-discuss] StarlingX Glossary In-Reply-To: References: <1591631182.557525360@apps.rackspace.com> Message-ID: <1591721630.566521471@apps.rackspace.com> Hi Bruce, Thank you for sending me the glossary! Yes, we will be using the etherpad to get community feedback and then editing the present glossary accordingly. Cheers, Helena -----Original Message----- From: "Jones, Bruce E" Sent: Monday, June 8, 2020 12:00pm To: "helena at openstack.org" , "starlingx-discuss at lists.starlingx.io" Subject: RE: [Starlingx-discuss] StarlingX Glossary Hi Helena. I think we already have a good start on this in the StarlingX documentation [1]. I suggest we focus on improving that glossary instead of starting a new one. Brucej [1] [ https://docs.starlingx.io/introduction/terms.html ]( https://docs.starlingx.io/introduction/terms.html ) From: helena at openstack.org Sent: Monday, June 8, 2020 8:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Glossary Greetings StarlingX Community! 
We are on a mission to create a glossary of StarlingX related terms and want your help! As the community grows and new contributors want to get involved, we hope to have a consistent definition to help familiarize them with the project. Similarly, having a glossary of terms has proven to be a good SEO tactic to gain more web traffic; by creating this glossary, we are hoping to have greater visibility to potential contributors, users, and supporting organizations. This is where you come in! We need your help to define the terms that we can use to educate future contributors Below is an etherpad link. We ask that you add, edit, review, and collaborate on this etherpad to help us make the StarlingX community more accessible and understandable. If you think of more terms to add to the list, please do! As always, feel free to reach out with any questions. Cheers, Helena Spease StarlingX: [ https://etherpad.opendev.org/p/StarlingX_Glossary ]( https://etherpad.opendev.org/p/StarlingX_Glossary ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Jun 9 17:54:35 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 9 Jun 2020 17:54:35 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 10, 2020) Message-ID: Hi all, reminder of tomorrow's TSC/Community call. Please feel free to add items to the agenda [0] for the Community call beforehand. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200610T1400 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jun 9 18:36:11 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 9 Jun 2020 20:36:11 +0200 Subject: [Starlingx-discuss] StarlingX confirmation review this Thursday Message-ID: <14E9D19B-7699-4BA2-B169-DFCB7E5A75B1@gmail.com> Hi StarlingX Community, It is a friendly reminder that the OSF Board meeting where we will have the project confirmation review and discussion with the OpenStack Foundation Board of Directors will take place this Thursday (June 11). The StarlingX slot is currently scheduled for 7:45am US Pacific Time. You can find the dial in and meeting details on this wiki: https://wiki.openstack.org/wiki/Governance/Foundation/11June2020BoardMeeting Please let me know if you have any questions. Thanks, Ildikó From ildiko.vancsa at gmail.com Tue Jun 9 19:41:50 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 9 Jun 2020 21:41:50 +0200 Subject: [Starlingx-discuss] PTG recordings Message-ID: <215864BD-8D09-439D-92D5-7F5F87EA84E1@gmail.com> Hi, Here are the links to last week’s PTG recordings with the corresponding passwords: * https://zoom.us/rec/play/6JZ7JOus_T03E4HHtwSDBKR5W43ofKqs0HIe8vBZmEi0AXIBYVHwZbUWZOoqdmCi1TMVaF5q8032Aa6y * Password: 5t%0?%89 * https://zoom.us/rec/play/7pwscuD7rDM3SdeUsgSDUfUqW9W1fa6shCMWr_FfyxuwB3VSYAGuMuMbauLIooiFoOfRE4H73YZjsK8t * Password: 2g=!qIsg * https://zoom.us/rec/play/6Z0vdOj6pjo3E92S4gSDAaJ9W43oeP6s0ScYrvNZzEfmAHYGO1fzYLYRa-JJgWzoFa4qxGvNMt2XU0O9 * Password: 9v at LNk9E Please let me know if you have any issues accessing the videos. 
Thanks, Ildikó From nicolae.jascanu at intel.com Tue Jun 9 20:38:42 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Tue, 9 Jun 2020 20:38:42 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200608T175940Z Message-ID: Sanity Test from 2020-June-08 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200608T175940Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200608T175940Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) - was not used because it was reserved for regression testing Regards, STX Validation Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Wed Jun 10 03:32:07 2020 From: yong.hu at intel.com (Hu, Yong) Date: Wed, 10 Jun 2020 03:32:07 +0000 Subject: [Starlingx-discuss] Nominate Zhipeng Liu and Kunpeng Zhang as core reviewers for: starlingx/openstack-armada-app and starlingx/upstream Message-ID: <21AA93A8-FC0D-4230-A590-B790914B5679@intel.com> Hi cores, I would like to nominate these 2 guys as core reviewers in following project: starlingx/openstack-armada-app: Zhipeng Liu: zhipeng.liu at intel.com, Kunpeng Zhang: zhang.kunpeng at 99cloud.net starlingx/upstream: Zhipeng Liu: zhipeng.liu at intel.com , Kunpeng Zhang: zhang.kunpeng at 99cloud.net Zhipeng and Kunpeng have been working on StarlingX distro.OpenStack and also on other projects since 2018, with the contributions mainly on OpenStack service upgrades and bug fixing. Besides StarlingX, Zhipeng and Kunpeng have made patches into other OpenStack projects. Up leveling them to core reviewers will enable them to contribute more and have better coverage on the patch review in Asian time zone. So, please let us know your feedback. Thanks!   Regards, Yong From taimoor.imtiaz at intel.com Wed Jun 10 07:32:03 2020 From: taimoor.imtiaz at intel.com (Imtiaz, Taimoor) Date: Wed, 10 Jun 2020 07:32:03 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG Message-ID: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> Hello Folks, I was going through the PTG discussions and etherpads and came across the topic of community and users. 
Although I’m a new-ish member of the community, I’d like to highlight some things we can also look at: Discussion Forums (Discourse, GitHub Discussions): We are using mailing lists for all discussions today. Most cloud-native projects are using Discourse forums (e.g. Kubernetes, Docker, LXC, LXD, LXCFS, etc. – virtually everyone in this space is part of a Discourse community. I want to double-stress this point actually). GitHub recently announced the Beta of Discussions. If STX is looking to build a community there, Discussions might be a nice, low-cost place to host the community. Besides this, many communities have Slack and Discord teams. But forums are infinitely more discoverable (if we’re not talking about ad-hoc discussions). Participation in other communities + Adoption Stories: We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. Fin’: In my personal experience: I’ve been using and talking about STX for the last 6 months. It is strange that for talking about STX internally, we’re using tools like MS Teams and Slack or Yammer/Discourse/PlanetBlue within our respective companies but the community has a 2nd class experience. In my opinion mailing lists and IRC are not the most modern way of managing large communities for modern, cloud-native projects. I’m sorry if this was already discussed some time ago and this is a repetition (Discourse has cool features to resolve these sorts of discussions btw. 😉) Best Taimoor Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.liu at windriver.com Tue Jun 9 15:49:17 2020 From: yang.liu at windriver.com (Liu, Yang (YOW)) Date: Tue, 9 Jun 2020 15:49:17 +0000 Subject: [Starlingx-discuss] [ Test ] meeting notes - 06/09/2020 Message-ID: Agenda for 6/9/2020 Attendees: Yang, Ruediger, GeorgeP, Mihail, Oliver, Nicolae, Andrew · Sanity Status: * Build issue in week of June 2nd. Today's sanity is ongoing and looking good so far. * There was a discussion in vPTG to add force reboot controller test into sanity - it is currently in progress. ETA: end of this week. · stx4.0 testing: * Feature testing: § https://docs.google.com/spreadsheets/d/1C9n4aRQT7xMyTDCT5sfuZGNI9ermAX5BYRypzcCpQ6U/edit#gid=0 § Centos8: · Two issues/test cases unanswered - test team is not actively working on this. Will move back on this sometime this week. § Ceph - Rook - taken out from stx 4.0 - test activities paused § Upversion Openstack services used by flock components on host: · Test completed - feature spreadsheet updated. § Upgrade Containerized OpenStack to Ussuri (and OpenStack helm rebase) · Planned sanity is completed. IPv6 is not covered. · Suggest to cover all openstack components as a minimum - e.g., nova, neutron, cinder, telemetry, glance, heat, etc · Also should run some automated regression and update automation code if openstack client changed. 
· Nicolae will contact Zhipeng for latest load for Ussuri § Windows Active directory completed · Testing is completed with small add-on to support multiple-dex § Red fish virtual media support - testing completed § Kubernetes Upgrade Support · Completed - feature spreadsheet updated. § Kata Containers · Kata container test completed · Nicolae to check with designer on this issue: "Check PID namespaces" - ETA: end of this week. § TSN · Still setting it up - complicated setups o Best case scenario: setup complete this week. § B&R with etcd database · Feature testing completed, spreadsheet updated · Weekly based regression is done - simplex system is passing * Regression testing: § Regression started - Both teams are making good progress - will update the regression spreadsheet at end of week. § Some Robot tests don't have proper teardown - manually workaround it. e.g., reset mtu, deleting network, etc, some are affecting rest of the test cases · Long term plan is to switch to pytest § Stability issues encountered · Yang's team: leave host reboot tests to the end. · Nic's team: sometimes have to reinstall system o Saw issue in lock/unlock controller - passes in sanity, but sometimes fails in regression - need to report new LP if encountered again. § Telemetry test cases fail - openstack event list does not work, LP opened. § Will continue with Regression with latest green load - will wait for 0608's sanity results § Regression will be tight if Ussuri merges late · Open topic * Test automation § Need automated installation script · Robot framwork installation scripts are used in daily sanity with basic setup and provisioning - needs libvert and qemu installed o https://opendev.org/starlingx/test/src/branch/master/automated-robot-suite o Nic: Look into adding to Docs/Wiki after stx4.0. § Nic will publish the robot regression test cases to github · https://github.com/starlingx-staging/robot-tests § After stx4.0, put more effort on test automation · Nic: move some robot regression to pytest · Yang: automate new feature test cases * Test in the open § Yang: coming up with VM requirement to send to opendev · Will discuss with networking expert for interface requirement -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Jun 10 13:20:37 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 10 Jun 2020 09:20:37 -0400 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: Message-ID: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> CENGN cycles aren't a problem.  People resources is a challenge. So the ask is for a manual build, on CENGN, adding in the nine patches listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). .. and the addition of two repos to the build-stx-base.sh step build-stx-base.sh    --repo local-stx-build,... \    --repo stx-distro,... \    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ Is that correct? Scott On 2020-06-09 9:04 a.m., Saul Wold wrote: > > Frank, Scott, Davelet: > > Are there cycles available on Cengn (and people resources) to do a > Cengn build with the Ussuri patch set applied?  I know this is > different than a branch build.  I think we have done this kind of > thing in the past. 
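On the two extra repos above: before kicking off the manual build it may be worth a quick check that both yum repos actually serve metadata from the CENGN builders (just a sanity-check sketch, not part of the build scripts):

  # confirm the added Ussuri repos resolve and expose yum metadata
  curl -fsI http://download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml >/dev/null && echo "ussuri-ceph repo OK"
  curl -fsI http://mirror.centos.org/centos/7/sclo/x86_64/rh/repodata/repomd.xml >/dev/null && echo "ussuri-wsgi repo OK"

If either of those fails, the build-stx-base.sh step would presumably hit the same problem once it tries to pull packages from them.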
> > This might help to make sure we don't have any more Cengn build issues > and could give the Test team a sanity spin with a Ussuri/Cengn build. > > Note there is a comment for Scott/Davelet at the bottom of Zhipeng's > email. > > Thanks >   Sau! > > > On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >> Hi all, >> >> So far, all block issues and concerns have been addressed. >> Since we have passed all sanity test, and Ussuri OpenStack has been >> officially released last month, >> there should be no more reason to block these patches merge. >> >> Next step: >> Let's push to get ussuri upgrade/openstack-helm rebasing patches >> merged. We need great help from core guys! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> >> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >> patch with workflow-1 and add depends-on for other patches as we need >> to merge them together.) >> Upgrade openstack-helm-infra zhipeng liu    >> starlingx/openstack-armada-app       workflow-1 >> Add mariadb database config override to support ipv6 zhipeng liu    >> starlingx/openstack-armada-app >> Fix render error in cinder during openstack-helm rebase zhipeng >> liu    starlingx/openstack-armada-app >> Update download list for openstack-helm upgrade zhipeng liu    >> starlingx/openstack-armada-app >> Update manifest.yaml file for openstack-helm upgrade.                >> zhipeng liu starlingx/openstack-armada-app >> Upgrade openstack-helm zhipeng liu    starlingx/openstack-armada-app >> >> # Below 3 patches is for OpenStack upgrade. >> Update manifest.yaml file for ussuri openstack                      >> YU CHENGDE starlingx/openstack-armada-app >> Modify build-tools and stable-wheels for Ussuri upgrading    YU >> CHENGDE    starlingx/root >> Upgrade openstack docker images for stable/ussuri        YU >> CHENGDE    starlingx/upstream >> >> >> After removing required python3 dependent packages from local, we can >> build out base image and OpenStack service images successfully with >> below command. >> =============================================================================== >> >> @Scott, please help to update cengn build script with below 2 >> additional repos and help to trigger image build >> build-stx-base.sh >>    --repo local-stx-build,... \ >>    --repo stx-distro,... \ >>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> Thanks a lot! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月8日 16:54 >> To: 'Miller, Frank' ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> It is not easy to figure out whether/how/when OpenStack-helm-info >> upstream introduce this issue and then fix it. >> I also could not find any fix in LP[1], which just mentioned that >> this intermittent issue not hit us after some changes in related field. >> >> Anyhow, below 2 patches should fix potential bug and I could not see >> the same error log again in our ussuri upgrade EB. >> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >> state management thread death >> >> Since we have passed fully test, we'd better push to merge ussuri >> upgrade/openstack-helm rebasing patches soon. 
>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >> >> Thanks! >> Zhipeng >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月5日 22:32 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> This looks promising.  Your theory is that the 2 openstack-helm-infra >> patches will fix the mariadb recovery issues.  These 2 patches were >> merged in the openstack-helm-infra project in January and February of >> 2020.   What would be good to know is what broke mariadb recovery >> between April of 2019 when Chris Friesen finished up his story [1] >> and our current loads today.  The most likely explanation is the >> upversion of Train or the upversion to openstack-helm-infra done in >> November 2019 introduced the mariadb recovery issues.  And then the >> openstack-helm folks found and fixed the issue earlier in 2020. >> >> If we had more time the preferred approach would be to merge just the >> openstack-helm-infra changes first to prove they address mariadb >> recovery and then in a separate commit merge Ussuri.  But since you >> have validated that mariadb recovers with your Ussuri branch and this >> branch has these openstack-helm commits, I support letting Ussuri >> merge into stx.4.0. >> >> Frank >> [1] https://storyboard.openstack.org/#!/story/2004712 >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Friday, June 05, 2020 2:36 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> As for OpenStack not recovering after both controllers are reset [1] >> I could not reproduce this issue with my Ussuri upgrade EB. >> My test step is: >> 1) ssh to standby controller and sudo reboot -f for it. >> 2) sudo reboot -f for activated controller All pods can resume after >> a while. >> >> However, I could reproduce this issue with DB 20200516T080009Z. >>  From error logs,  it is an old issue analyzed by Chris Friesen in >> [2] early last year. >> >> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >> It includes below 2 patches which fixed this stability issue. >> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >> state management thread death >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 22:35 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> This is not a new requirement.  Users expect the software to recover >> when resets occur. >> >> As I had mentioned at the PTG yesterday I know personally that this >> test passed in stx3.0 before the upversion to train. Someone else who >> performs testing can look to determine when this test was done as >> part of feature testing after train was delivered as it should have >> been tested as part of stx.3.0 as well.  I do not know when this >> started to break.  
One topic we will discuss at the PTG tomorrow will >> be how to improve our test coverage and automation so this type of >> issue can be found immediately as new code is being delivered. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Wednesday, June 03, 2020 10:28 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Frank, >> >> Have we pass this case before?  Is it a new requirement? >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 22:12 >> To: Miller, Frank ; Liu, ZhipengS >> ; starlingx-discuss at lists.starlingx.io; >> Church, Robert >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Yong/Zhipeng - the LP for openstack not recovering after both >> controllers are reset is >> https://bugs.launchpad.net/starlingx/+bug/1881899 >> >> Ovidiu is investigating and will provide any updates from his >> investigation.  Please continue to keep us informed of your >> investigation. >> >> Frank >> >> -----Original Message----- >> From: Miller, Frank >> Sent: Tuesday, June 02, 2020 10:38 PM >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> We used a build from May 28. >> >> As for the decoupling issue these are actively being worked.  If you >> run the system helm-override-show command when the stx-openstack app >> is applied you won’t see the CLI command fail.  It only fails when >> you try a helm-override-show when the app is in uploaded state.  In >> any case this will be fixed shortly. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Tuesday, June 02, 2020 10:04 PM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Thanks for your quick update! >> Which build are you using to test this case? >> Since decoupling commits introduced several regressions (at least >> 2),  not propose to do this kind of stability test with latest build. >> BTW, do we have plan to revert them considering this stability risk?  >> Our Ussuri upgrade patches is waiting for it☹ >> >> Furthermore, we have not seen this test case that force reboot both >> controllers at the same time. Is it a new requirement?  If not , have >> we pass this case before, which build? >> I'd like to help on it with the pass build for comparative analysis. >> From my point , mariadb might not work if we reboot both controllers. >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 8:55 >> To: Miller, Frank ; Liu, ZhipengS >> ; starlingx-discuss at lists.starlingx.io; >> Church, Robert >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> An update on our testing and analysis today.  We are able to >> reproduce the issue with OpenStack not recovering when we trigger a >> reboot of both AIO controllers at the same time.  This results in >> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >> openstack commands not working indefinitely after the controllers >> recover.  We'll create a launchpad tomorrow to track this issue. 
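>> For anyone triaging that state, a minimal sketch of where to look (the pod name is just the mariadb one already mentioned in this thread; substitute whatever kubectl reports as CrashLoopBackOff):
>>
>>   # which openstack pods are crash-looping
>>   kubectl get pods -n openstack | grep -i crashloop
>>   # recent events for the mariadb server pod (probe failures, mount failures, restarts)
>>   kubectl describe pod -n openstack mariadb-server-0
>>   # logs from the previous, crashed container instance
>>   kubectl logs -n openstack mariadb-server-0 --previous
>>
>> That should at least show whether mariadb is failing to re-form the Galera cluster (the splitbrain case the two openstack-helm-infra patches address) or is being killed by something else.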
>> >> Frank >> >> -----Original Message----- >> From: Miller, Frank >> Sent: Tuesday, June 02, 2020 12:25 PM >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Thanks Zhipeng for the analysis.  What is challenging here is the >> multitude of issues. >> >> In our debug of openstack the past few days we are seeing the app >> fail completely.  After investigation this issue is a Day 1 >> containerd issue.  This is tracked in LP: >> https://bugs.launchpad.net/starlingx/+bug/1881353 >> >> The issue you are seeing on a swact is a new and very recent issue >> tied to the decoupling commits that were merged late last week.  Bob >> is investigating and I expect he'll have a fix soon for that. >> >> But the issues we are most concerned with are when we see mariadb >> crashing and not able to recover or with openstack services not >> working for longer periods of time.  We're attempting to isolate the >> sequence of events that trigger this. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Tuesday, June 02, 2020 11:47 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>              Unable to unlock controller after swact and lock w/ >> openstack applied I also tested with daily build 20200516T080009Z. >> However, it could not be reproduced. >> We should  fix this regression ASAP! >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月2日 16:48 >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank and all, >> >> Update for issue 2. >> I raised a new LP to track it. >> https://bugs.launchpad.net/starlingx/+bug/1881722 >> Below is the time statistics. It seems reasonable. No obvious issue >> found. >> 1) 3~4min for host restart and get ready. >> 2) 2~3min for mariadb terminating, initialization, get ready. (then >> configmap sync is ready) >> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a >> little, as it can retry quickly to connect ovs-vsctl: >> unix:/var/run/openvswitch/db.sock) >> 4) 1min for other pods ready, like neutron-ovs-agent which depends on >> ovs-db. ) Any comment? >> >> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>              Unable to unlock controller after swact and lock w/ >> openstack applied >>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>              system helm-override-show stx-openstack mariadb >> openstack crash  It seems related to openstack plugin decouple >> related patches. Should be a regression. >>   Please see our update in this 2 LPs for detail info.  @Bob, could >> you pls help further check it and your patches, thanks! >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月1日 16:20 >> To: 'Miller, Frank' ; >> 'starlingx-discuss at lists.starlingx.io' >> ; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> I also tested the issue 2 with latest daily build on duplex setup. >> The conclusion is that the issue is there all the time. 
>> This issue might not be fixed soon, but should not block OpenStack >> upgrade, right? >> >> For 9 OpenStack patches below, I have removed all workflow-1, except >> the first patch and add depends-on all them. >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> Your review and comments are welcome! >> >> As for issue 2, some detail info FYI. >> It also needs to wait for around 10 min before all pods are ready >> again after reboot for master build. >> It stuck on below 2 pods for 10 min. The same as the one I saw with >> my OpenStack upgrade engineering build. >>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >> openvswitch-db) >>       openvswitch-db-8fxkw >> Related key logs below. >>    Warning  FailedMount  2m19s              kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >> failed to sync secret cache: timed out waiting for the condition >>    Warning  FailedMount  2m19s              kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >> sync configmap cache: timed out waiting for the condition >>    Warning  FailedMount  105s               kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >> failed to sync secret cache: timed out waiting for the condition >>    Warning  FailedMount  105s               kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >> sync configmap cache: timed out waiting for the condition >>    Warning  Unhealthy    30s                kubelet, controller-1  >> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >> database connection failed (Permission denied) >>    Warning  Unhealthy    7s                 kubelet, controller-1  >> Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >> database connection failed (Permission denied) >> >> Is it the same stability issue as the one reported from your test >> team?  I can only see this issue after force rebooting. What is our >> expected recovery time? >> Your comment is appreciated! >> >> Thanks! >> Zhipeng >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月29日 9:42 >> To: 'Miller, Frank' ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Glad to see your quick reply!! >> For OpenStack upgrade task, we have finished all test and get patches >> ready for more than 2 weeks, but no any review comments and feedback >> from your side.  What's the next step? >> >> For issue # 2,  in community meeting notes,  I saw that you had some >> stability issue from WR local test team. But so far, I do not see any >> LP for the detail info. You should ask them to do that!  Right? >> >> According to your concern, I tried to reproduce it with my build >> (cherry pick OpenStack upgrade patches)yesterday, and the original >> issue [1] was not seen any more, mariadb got ready quickly, no >> regression. >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月29日 1:07 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Thanks Zhipeng. >> >> Good to see progress on IPv6. 
>> Waiting for 10 minutes for pods to recover isn't a good result. Is >> there a LP open on this issue?  Which pods are not ready? What can >> you tell us about this 10 minute outage? >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Thursday, May 28, 2020 5:06 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Nicolae already added test case description. Thanks Nicolae! >> >> I also did below test on AIO-DX virtual setup, exactly according to >> your mentioned steps. >> No issue found, but just need to wait for around 10 min before all >> pods are ready again after reboot. >> >> For ipv6 issue, I have submitted new patch for it since dynamic >> override for database config did not work. >>   https://review.opendev.org/#/c/731461/ >>   https://review.opendev.org/#/c/731470/ >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月27日 22:43 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> Thanks for the info.  You have provided the # of testcases but not >> what those testcase do.  Where can I find a description of what the >> OpenStack testcases do? >> >> For the controller reset testcases I'd like to see the test result >> for the following: >> Is openstack usable during the following scenarios on AIO-DX and on >> Standard configurations: >> - Lock/unlock of standby controller >> - reset (ie: reboot -f) of the standby controller >> - reset (ie: reboot -f) of the active controller >> - reapply of stx-openstack after the above scenarios >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Wednesday, May 27, 2020 9:15 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> We have done below tests. >> 1) Sanity tests by Nicolae. 
>> AIO - Simplex >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             49 TCs [PASS] >> Sanity Platform                 07 TCs [PASS] >> >> TOTAL: [ 61 TCs ] >> >> AIO - Duplex >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 07 TCs [PASS] >> >> TOTAL: [ 64 TCs ] >> >> Standard - Local Storage (2+2) >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 08 TCs [PASS] >> >> TOTAL: [ 65 TCs ] >> >> Standard External - Dedicated Storage (2+2+2) >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 09 TCs [PASS] >> >> TOTAL: [ 66 TCs ] >> >> 2) NFV scenario test by me >>      on duplex/multi standard virtual setup >>            duplex bare metal setup >> ===== Setup >> ================================================================================================================================= >> 2020-05-14 02:30:05.524  Create flavor small >> ........................................ [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >> .............................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_swap >> ................................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >> ......................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium >> ....................................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >> ............................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_swap >> .................................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >> ........................ [OKAY] >> 2020-05-14 02:30:05.653  Create image cirros >> ........................................ [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros >> ....................................... [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >> ............................. [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-swap >> .................................. [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >> ........................ [OKAY] >> 2020-05-14 02:30:05.695  Create volume empty_volume >> ................................. [OKAY] >> 2020-05-14 02:30:05.786  Create network internal >> .................................... [OKAY] >> 2020-05-14 02:30:06.158  Create network external >> .................................... [OKAY] >> 2020-05-14 02:30:06.772  Create subnet internal >> ..................................... [OKAY] >> 2020-05-14 02:30:07.661  Create subnet external >> ..................................... [OKAY] >> 2020-05-14 02:30:08.553  Create instance cirros-1 >> ................................... [OKAY] >> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >> ......................... [OKAY] >> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >> .............................. [OKAY] >> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1  >> .................... 
[OKAY] >> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >> ............................. [OKAY] >> 2020-05-14 02:31:21.241  Create instance cirros-image-with-volumes-1  >> ................ [OKAY] >> ============================================================================================================================================= >> ===== Test Iteration 0 (single-execution) >> =================================================================================================== >> 2020-05-14 02:33:04.172  Test Instance-Pause >> ........................................ [OKAY]  (2020-05-14 >> 02:33:18.078 Δ=0:00:12.870) >> 2020-05-14 02:33:35.073  Test Instance-Unpause >> ...................................... [OKAY]  (2020-05-14 >> 02:33:41.608 Δ=0:00:05.866) >> 2020-05-14 02:33:53.049  Test Instance-Suspend >> ...................................... [OKAY]  (2020-05-14 >> 02:33:59.546 Δ=0:00:05.792) >> 2020-05-14 02:34:11.103  Test Instance-Resume >> ....................................... [OKAY]  (2020-05-14 >> 02:34:17.756 Δ=0:00:05.937) >> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >> ................................ [OKAY]  (2020-05-14 02:36:45.923 >> Δ=0:02:15.748) >> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >> ................................ [OKAY]  (2020-05-14 02:37:14.504 >> Δ=0:00:11.704) >> 2020-05-14 02:37:30.673  Test Instance-Stop >> ......................................... [OKAY]  (2020-05-14 >> 02:38:44.543 Δ=0:01:13.220) >> 2020-05-14 02:39:00.481  Test Instance-Start >> ........................................ [OKAY]  (2020-05-14 >> 02:39:07.198 Δ=0:00:06.068) >> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >> ................................. [OKAY]  (2020-05-14 02:39:41.692 >> Δ=0:00:22.306) >> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >> ................................. [OKAY]  (2020-05-14 02:41:22.720 >> Δ=0:01:24.179) >> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >> ......................... [OKAY]  (2020-05-14 02:41:45.441 >> Δ=0:00:05.884) >> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >> .......................... [OKAY]  (2020-05-14 02:43:36.381 >> Δ=0:00:21.637) >> 2020-05-14 02:43:52.320  Test Instance-Resize >> ....................................... [OKAY]  (2020-05-14 >> 02:45:16.409 Δ=0:01:22.812) >> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >> ............................... [OKAY]  (2020-05-14 02:45:39.119 >> Δ=0:00:05.777) >> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >> ................................ [OKAY]  (2020-05-14 02:47:30.175 >> Δ=0:00:21.748) >> 2020-05-14 02:47:46.230  Test Instance-Rebuild >> ...................................... [OKAY]  (2020-05-14 >> 02:48:59.762 Δ=0:01:12.980) >> Total-Tests: 16     Execution-Time: 0:16:11.676 >> >> 3) Another 2 test >>      a) Using IPv6 >>           It can pass with workaround now.  I need one more fix for it. >>           In my previous patch https://review.opendev.org/#/c/716524 >> (merged), I dynamically override below >>              config_override: | >>                  [mysqld] >>                  bind_address=:: >>           However, it did not work now. 
From log,  it shows error >> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >> line: 1'" >>           I tried many methods, but could not remove the first line >> in 20-override.cnf >>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >> 20-override.cnf >>                  |- >>                  [mysqld] >>                  bind_address=:: >>          I can only add it in manifest.yaml as a static override like >> below. >>                 values: >>                    conf: >>                        database: >>                            config_override: | >>                                [mysqld] >>                                bind_address=:: >>                   b) Reset of controllers and check status of >> OpenStack while a controller is rebooting. >>           I have tested it and pass on simplex. >>           For duplex, I have a setup issue in my side. >>           @Jascanu, Nicolae  Could you help me on it for duplex test, >> if you have time today. Thanks! >> >> Zhipeng >> >> >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月26日 21:13 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> Can you publish the list of tests that have been run for openstack? >> >> Also has openstack been tested for the following scenarios: >> 1) Using IPv6 >> 2) Reset of controllers and check status of openstack while a >> controller is rebooting? >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Monday, May 25, 2020 3:14 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi all, >> >> We have passed all sanity test on all setup. Thanks Nicolae!! >> We also built out OpenStack service images from layered build >> environment. >> >> Please help to review and push below patches to be merged, thanks! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >> >> BRs >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月14日 16:49 >> To: 'Saul Wold' ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi all, >> >> Call for patch review again! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月9日 8:38 >> To: Saul Wold ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Agree! >> >> -----Original Message----- >> From: Saul Wold >> Sent: 2020年5月9日 0:29 >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> I would strengthen that to no changes until we get Green Sanity other >> than what's required to make them Green. >> >> Full Stop! >> >> Sau! >> >> >> On 5/8/20 9:05 AM, Miller, Frank wrote: >>> Until we can get sanity passing for several days in a row I strongly >>> suggest we do not allow any further changes into the load related to >>> OpenStack.  Folks can continue with reviews but let’s hold off >>> allowing merges related to a new OpenStack version. 
>>> >>> Frank >>> >>> *From:*Liu, ZhipengS >>> *Sent:* Friday, May 08, 2020 11:59 AM >>> *To:* starlingx-discuss >>> *Cc:* YU CHENGDE ; Penney, Don >>> >>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi all, >>> >>> Please help to review OpenStack Ussuri upgrade patches. >>> >>> Our target is to get all below patches merged by end of next week. >>> >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>> :merged) >>> >>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>> python3.6 for OpenStack services as ussuri release only support >>> python3. >>> >>> We also rebased openstack-helm/helm-infra to latest version. >>> >>> Engineering build test status. >>> >>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. >>> >>> Thanks! >>> >>> Zhipeng >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Wed Jun 10 13:23:25 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 10 Jun 2020 09:23:25 -0400 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: Message-ID: <1f303d98-f809-e58c-7eb8-17d9ecb3bd69@windriver.com> I guest the question is where to publish the docker images. github, but with a 'ussuri' element added to the tag ? Scott On 2020-06-09 9:04 a.m., Saul Wold wrote: > > Frank, Scott, Davelet: > > Are there cycles available on Cengn (and people resources) to do a > Cengn build with the Ussuri patch set applied?  I know this is > different than a branch build.  
I think we have done this kind of > thing in the past. > > This might help to make sure we don't have any more Cengn build issues > and could give the Test team a sanity spin with a Ussuri/Cengn build. > > Note there is a comment for Scott/Davelet at the bottom of Zhipeng's > email. > > Thanks >   Sau! > > > On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >> Hi all, >> >> So far, all block issues and concerns have been addressed. >> Since we have passed all sanity test, and Ussuri OpenStack has been >> officially released last month, >> there should be no more reason to block these patches merge. >> >> Next step: >> Let's push to get ussuri upgrade/openstack-helm rebasing patches >> merged. We need great help from core guys! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> >> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >> patch with workflow-1 and add depends-on for other patches as we need >> to merge them together.) >> Upgrade openstack-helm-infra zhipeng liu    >> starlingx/openstack-armada-app       workflow-1 >> Add mariadb database config override to support ipv6 zhipeng liu    >> starlingx/openstack-armada-app >> Fix render error in cinder during openstack-helm rebase zhipeng >> liu    starlingx/openstack-armada-app >> Update download list for openstack-helm upgrade zhipeng liu    >> starlingx/openstack-armada-app >> Update manifest.yaml file for openstack-helm upgrade.                >> zhipeng liu starlingx/openstack-armada-app >> Upgrade openstack-helm zhipeng liu    starlingx/openstack-armada-app >> >> # Below 3 patches is for OpenStack upgrade. >> Update manifest.yaml file for ussuri openstack                      >> YU CHENGDE starlingx/openstack-armada-app >> Modify build-tools and stable-wheels for Ussuri upgrading    YU >> CHENGDE    starlingx/root >> Upgrade openstack docker images for stable/ussuri        YU >> CHENGDE    starlingx/upstream >> >> >> After removing required python3 dependent packages from local, we can >> build out base image and OpenStack service images successfully with >> below command. >> =============================================================================== >> >> @Scott, please help to update cengn build script with below 2 >> additional repos and help to trigger image build >> build-stx-base.sh >>    --repo local-stx-build,... \ >>    --repo stx-distro,... \ >>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> Thanks a lot! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月8日 16:54 >> To: 'Miller, Frank' ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> It is not easy to figure out whether/how/when OpenStack-helm-info >> upstream introduce this issue and then fix it. >> I also could not find any fix in LP[1], which just mentioned that >> this intermittent issue not hit us after some changes in related field. >> >> Anyhow, below 2 patches should fix potential bug and I could not see >> the same error log again in our ussuri upgrade EB. >> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >> state management thread death >> >> Since we have passed fully test, we'd better push to merge ussuri >> upgrade/openstack-helm rebasing patches soon. 
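For completeness, a sketch of the full base-image build invocation being requested above. The --os and --stream options follow the usual build-stx-base.sh usage, and the two local repo URLs are placeholders for the build host's own mirror locations (they are elided as "..." in the note above), so treat everything except the two Ussuri repos as illustrative:

    # The first two repo paths are placeholders, not values from this thread.
    time $MY_REPO/build-tools/build-docker-images/build-stx-base.sh \
        --os centos \
        --stream stable \
        --repo local-stx-build,file:///localdisk/loadbuild/${USER}/starlingx/std/rpmbuild/RPMS \
        --repo stx-distro,file:///import/mirrors/starlingx/stx-distro/ \
        --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \
        --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/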
>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >> >> Thanks! >> Zhipeng >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月5日 22:32 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> This looks promising.  Your theory is that the 2 openstack-helm-infra >> patches will fix the mariadb recovery issues.  These 2 patches were >> merged in the openstack-helm-infra project in January and February of >> 2020.   What would be good to know is what broke mariadb recovery >> between April of 2019 when Chris Friesen finished up his story [1] >> and our current loads today.  The most likely explanation is the >> upversion of Train or the upversion to openstack-helm-infra done in >> November 2019 introduced the mariadb recovery issues.  And then the >> openstack-helm folks found and fixed the issue earlier in 2020. >> >> If we had more time the preferred approach would be to merge just the >> openstack-helm-infra changes first to prove they address mariadb >> recovery and then in a separate commit merge Ussuri.  But since you >> have validated that mariadb recovers with your Ussuri branch and this >> branch has these openstack-helm commits, I support letting Ussuri >> merge into stx.4.0. >> >> Frank >> [1] https://storyboard.openstack.org/#!/story/2004712 >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Friday, June 05, 2020 2:36 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Friesen, Chris >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> As for OpenStack not recovering after both controllers are reset [1] >> I could not reproduce this issue with my Ussuri upgrade EB. >> My test step is: >> 1) ssh to standby controller and sudo reboot -f for it. >> 2) sudo reboot -f for activated controller All pods can resume after >> a while. >> >> However, I could reproduce this issue with DB 20200516T080009Z. >>  From error logs,  it is an old issue analyzed by Chris Friesen in >> [2] early last year. >> >> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >> It includes below 2 patches which fixed this stability issue. >> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >> state management thread death >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 22:35 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> This is not a new requirement.  Users expect the software to recover >> when resets occur. >> >> As I had mentioned at the PTG yesterday I know personally that this >> test passed in stx3.0 before the upversion to train. Someone else who >> performs testing can look to determine when this test was done as >> part of feature testing after train was delivered as it should have >> been tested as part of stx.3.0 as well.  I do not know when this >> started to break.  
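Spelled out as commands, the two-step reset described above looks roughly like this on an AIO-DX lab (controller-1 as the standby node and the openstack namespace are assumptions about the particular setup, not details from the thread):

    # Run from the active controller.
    source /etc/platform/openrc
    system host-list                      # confirm which controller is standby
    ssh controller-1 'sudo reboot -f'     # step 1: force-reset the standby
    sudo reboot -f                        # step 2: force-reset the active controller
    # Once both controllers are back up, list anything that has not recovered:
    kubectl -n openstack get pods | grep -vE 'Running|Completed'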
One topic we will discuss at the PTG tomorrow will >> be how to improve our test coverage and automation so this type of >> issue can be found immediately as new code is being delivered. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Wednesday, June 03, 2020 10:28 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Frank, >> >> Have we pass this case before?  Is it a new requirement? >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 22:12 >> To: Miller, Frank ; Liu, ZhipengS >> ; starlingx-discuss at lists.starlingx.io; >> Church, Robert >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Yong/Zhipeng - the LP for openstack not recovering after both >> controllers are reset is >> https://bugs.launchpad.net/starlingx/+bug/1881899 >> >> Ovidiu is investigating and will provide any updates from his >> investigation.  Please continue to keep us informed of your >> investigation. >> >> Frank >> >> -----Original Message----- >> From: Miller, Frank >> Sent: Tuesday, June 02, 2020 10:38 PM >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> We used a build from May 28. >> >> As for the decoupling issue these are actively being worked.  If you >> run the system helm-override-show command when the stx-openstack app >> is applied you won’t see the CLI command fail.  It only fails when >> you try a helm-override-show when the app is in uploaded state.  In >> any case this will be fixed shortly. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Tuesday, June 02, 2020 10:04 PM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Thanks for your quick update! >> Which build are you using to test this case? >> Since decoupling commits introduced several regressions (at least >> 2),  not propose to do this kind of stability test with latest build. >> BTW, do we have plan to revert them considering this stability risk?  >> Our Ussuri upgrade patches is waiting for it☹ >> >> Furthermore, we have not seen this test case that force reboot both >> controllers at the same time. Is it a new requirement?  If not , have >> we pass this case before, which build? >> I'd like to help on it with the pass build for comparative analysis. >> From my point , mariadb might not work if we reboot both controllers. >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年6月3日 8:55 >> To: Miller, Frank ; Liu, ZhipengS >> ; starlingx-discuss at lists.starlingx.io; >> Church, Robert >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> An update on our testing and analysis today.  We are able to >> reproduce the issue with OpenStack not recovering when we trigger a >> reboot of both AIO controllers at the same time.  This results in >> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >> openstack commands not working indefinitely after the controllers >> recover.  We'll create a launchpad tomorrow to track this issue. 
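When recovery fails like this and mariadb sits in CrashLoopBackoff, the Galera state each replica last recorded can be inspected directly. A sketch, assuming the openstack-helm labels and data directory (application=mariadb, /var/lib/mysql); both are assumptions rather than details confirmed in this thread:

    kubectl -n openstack get pods -l application=mariadb -o wide
    kubectl -n openstack logs mariadb-server-0 --tail=100
    # grastate.dat holds the last Galera seqno and the safe_to_bootstrap flag,
    # which is part of what Galera consults when deciding whether a node may
    # bootstrap the cluster after a full outage.
    kubectl -n openstack exec mariadb-server-0 -- cat /var/lib/mysql/grastate.dat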
>> >> Frank >> >> -----Original Message----- >> From: Miller, Frank >> Sent: Tuesday, June 02, 2020 12:25 PM >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Thanks Zhipeng for the analysis.  What is challenging here is the >> multitude of issues. >> >> In our debug of openstack the past few days we are seeing the app >> fail completely.  After investigation this issue is a Day 1 >> containerd issue.  This is tracked in LP: >> https://bugs.launchpad.net/starlingx/+bug/1881353 >> >> The issue you are seeing on a swact is a new and very recent issue >> tied to the decoupling commits that were merged late last week.  Bob >> is investigating and I expect he'll have a fix soon for that. >> >> But the issues we are most concerned with are when we see mariadb >> crashing and not able to recover or with openstack services not >> working for longer periods of time.  We're attempting to isolate the >> sequence of events that trigger this. >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Tuesday, June 02, 2020 11:47 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>              Unable to unlock controller after swact and lock w/ >> openstack applied I also tested with daily build 20200516T080009Z. >> However, it could not be reproduced. >> We should  fix this regression ASAP! >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月2日 16:48 >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Church, Robert >> >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank and all, >> >> Update for issue 2. >> I raised a new LP to track it. >> https://bugs.launchpad.net/starlingx/+bug/1881722 >> Below is the time statistics. It seems reasonable. No obvious issue >> found. >> 1) 3~4min for host restart and get ready. >> 2) 2~3min for mariadb terminating, initialization, get ready. (then >> configmap sync is ready) >> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a >> little, as it can retry quickly to connect ovs-vsctl: >> unix:/var/run/openvswitch/db.sock) >> 4) 1min for other pods ready, like neutron-ovs-agent which depends on >> ovs-db. ) Any comment? >> >> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>              Unable to unlock controller after swact and lock w/ >> openstack applied >>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>              system helm-override-show stx-openstack mariadb >> openstack crash  It seems related to openstack plugin decouple >> related patches. Should be a regression. >>   Please see our update in this 2 LPs for detail info.  @Bob, could >> you pls help further check it and your patches, thanks! >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年6月1日 16:20 >> To: 'Miller, Frank' ; >> 'starlingx-discuss at lists.starlingx.io' >> ; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> I also tested the issue 2 with latest daily build on duplex setup. >> The conclusion is that the issue is there all the time. 
>> This issue might not be fixed soon, but should not block OpenStack >> upgrade, right? >> >> For 9 OpenStack patches below, I have removed all workflow-1, except >> the first patch and add depends-on all them. >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >> Your review and comments are welcome! >> >> As for issue 2, some detail info FYI. >> It also needs to wait for around 10 min before all pods are ready >> again after reboot for master build. >> It stuck on below 2 pods for 10 min. The same as the one I saw with >> my OpenStack upgrade engineering build. >>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >> openvswitch-db) >>       openvswitch-db-8fxkw >> Related key logs below. >>    Warning  FailedMount  2m19s              kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >> failed to sync secret cache: timed out waiting for the condition >>    Warning  FailedMount  2m19s              kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >> sync configmap cache: timed out waiting for the condition >>    Warning  FailedMount  105s               kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >> failed to sync secret cache: timed out waiting for the condition >>    Warning  FailedMount  105s               kubelet, controller-1  >> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >> sync configmap cache: timed out waiting for the condition >>    Warning  Unhealthy    30s                kubelet, controller-1  >> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >> database connection failed (Permission denied) >>    Warning  Unhealthy    7s                 kubelet, controller-1  >> Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >> database connection failed (Permission denied) >> >> Is it the same stability issue as the one reported from your test >> team?  I can only see this issue after force rebooting. What is our >> expected recovery time? >> Your comment is appreciated! >> >> Thanks! >> Zhipeng >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月29日 9:42 >> To: 'Miller, Frank' ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Glad to see your quick reply!! >> For OpenStack upgrade task, we have finished all test and get patches >> ready for more than 2 weeks, but no any review comments and feedback >> from your side.  What's the next step? >> >> For issue # 2,  in community meeting notes,  I saw that you had some >> stability issue from WR local test team. But so far, I do not see any >> LP for the detail info. You should ask them to do that!  Right? >> >> According to your concern, I tried to reproduce it with my build >> (cherry pick OpenStack upgrade patches)yesterday, and the original >> issue [1] was not seen any more, mariadb got ready quickly, no >> regression. >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月29日 1:07 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Thanks Zhipeng. >> >> Good to see progress on IPv6. 
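The window where those two pods stay stuck can be watched with plain kubectl; the pod names below are the generated ones from this particular run, and the openstack namespace is assumed:

    kubectl -n openstack get pods -o wide | grep -E 'openvswitch-db|neutron-ovs-agent'
    # The FailedMount and probe failures quoted above come from the pod events:
    kubectl -n openstack describe pod openvswitch-db-8fxkw | tail -n 40
    # Re-check periodically until both report Running/Ready to gauge recovery time:
    watch -n 10 'kubectl -n openstack get pods | grep -E "openvswitch-db|neutron-ovs-agent"'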
>> Waiting for 10 minutes for pods to recover isn't a good result. Is >> there a LP open on this issue?  Which pods are not ready? What can >> you tell us about this 10 minute outage? >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Thursday, May 28, 2020 5:06 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> Nicolae already added test case description. Thanks Nicolae! >> >> I also did below test on AIO-DX virtual setup, exactly according to >> your mentioned steps. >> No issue found, but just need to wait for around 10 min before all >> pods are ready again after reboot. >> >> For ipv6 issue, I have submitted new patch for it since dynamic >> override for database config did not work. >>   https://review.opendev.org/#/c/731461/ >>   https://review.opendev.org/#/c/731470/ >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月27日 22:43 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> Thanks for the info.  You have provided the # of testcases but not >> what those testcase do.  Where can I find a description of what the >> OpenStack testcases do? >> >> For the controller reset testcases I'd like to see the test result >> for the following: >> Is openstack usable during the following scenarios on AIO-DX and on >> Standard configurations: >> - Lock/unlock of standby controller >> - reset (ie: reboot -f) of the standby controller >> - reset (ie: reboot -f) of the active controller >> - reapply of stx-openstack after the above scenarios >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Wednesday, May 27, 2020 9:15 AM >> To: Miller, Frank ; >> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi Frank, >> >> We have done below tests. >> 1) Sanity tests by Nicolae. 
>> AIO - Simplex >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             49 TCs [PASS] >> Sanity Platform                 07 TCs [PASS] >> >> TOTAL: [ 61 TCs ] >> >> AIO - Duplex >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 07 TCs [PASS] >> >> TOTAL: [ 64 TCs ] >> >> Standard - Local Storage (2+2) >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 08 TCs [PASS] >> >> TOTAL: [ 65 TCs ] >> >> Standard External - Dedicated Storage (2+2+2) >> Setup                                    04 TCs [PASS] >> Provisioning                       01 TCs [PASS] >> Sanity OpenStack             52 TCs [PASS] >> Sanity Platform                 09 TCs [PASS] >> >> TOTAL: [ 66 TCs ] >> >> 2) NFV scenario test by me >>      on duplex/multi standard virtual setup >>            duplex bare metal setup >> ===== Setup >> ================================================================================================================================= >> 2020-05-14 02:30:05.524  Create flavor small >> ........................................ [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >> .............................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_swap >> ................................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >> ......................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium >> ....................................... [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >> ............................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_swap >> .................................. [OKAY] >> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >> ........................ [OKAY] >> 2020-05-14 02:30:05.653  Create image cirros >> ........................................ [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros >> ....................................... [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >> ............................. [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-swap >> .................................. [OKAY] >> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >> ........................ [OKAY] >> 2020-05-14 02:30:05.695  Create volume empty_volume >> ................................. [OKAY] >> 2020-05-14 02:30:05.786  Create network internal >> .................................... [OKAY] >> 2020-05-14 02:30:06.158  Create network external >> .................................... [OKAY] >> 2020-05-14 02:30:06.772  Create subnet internal >> ..................................... [OKAY] >> 2020-05-14 02:30:07.661  Create subnet external >> ..................................... [OKAY] >> 2020-05-14 02:30:08.553  Create instance cirros-1 >> ................................... [OKAY] >> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >> ......................... [OKAY] >> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >> .............................. [OKAY] >> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1  >> .................... 
[OKAY] >> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >> ............................. [OKAY] >> 2020-05-14 02:31:21.241  Create instance cirros-image-with-volumes-1  >> ................ [OKAY] >> ============================================================================================================================================= >> ===== Test Iteration 0 (single-execution) >> =================================================================================================== >> 2020-05-14 02:33:04.172  Test Instance-Pause >> ........................................ [OKAY]  (2020-05-14 >> 02:33:18.078 Δ=0:00:12.870) >> 2020-05-14 02:33:35.073  Test Instance-Unpause >> ...................................... [OKAY]  (2020-05-14 >> 02:33:41.608 Δ=0:00:05.866) >> 2020-05-14 02:33:53.049  Test Instance-Suspend >> ...................................... [OKAY]  (2020-05-14 >> 02:33:59.546 Δ=0:00:05.792) >> 2020-05-14 02:34:11.103  Test Instance-Resume >> ....................................... [OKAY]  (2020-05-14 >> 02:34:17.756 Δ=0:00:05.937) >> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >> ................................ [OKAY]  (2020-05-14 02:36:45.923 >> Δ=0:02:15.748) >> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >> ................................ [OKAY]  (2020-05-14 02:37:14.504 >> Δ=0:00:11.704) >> 2020-05-14 02:37:30.673  Test Instance-Stop >> ......................................... [OKAY]  (2020-05-14 >> 02:38:44.543 Δ=0:01:13.220) >> 2020-05-14 02:39:00.481  Test Instance-Start >> ........................................ [OKAY]  (2020-05-14 >> 02:39:07.198 Δ=0:00:06.068) >> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >> ................................. [OKAY]  (2020-05-14 02:39:41.692 >> Δ=0:00:22.306) >> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >> ................................. [OKAY]  (2020-05-14 02:41:22.720 >> Δ=0:01:24.179) >> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >> ......................... [OKAY]  (2020-05-14 02:41:45.441 >> Δ=0:00:05.884) >> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >> .......................... [OKAY]  (2020-05-14 02:43:36.381 >> Δ=0:00:21.637) >> 2020-05-14 02:43:52.320  Test Instance-Resize >> ....................................... [OKAY]  (2020-05-14 >> 02:45:16.409 Δ=0:01:22.812) >> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >> ............................... [OKAY]  (2020-05-14 02:45:39.119 >> Δ=0:00:05.777) >> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >> ................................ [OKAY]  (2020-05-14 02:47:30.175 >> Δ=0:00:21.748) >> 2020-05-14 02:47:46.230  Test Instance-Rebuild >> ...................................... [OKAY]  (2020-05-14 >> 02:48:59.762 Δ=0:01:12.980) >> Total-Tests: 16     Execution-Time: 0:16:11.676 >> >> 3) Another 2 test >>      a) Using IPv6 >>           It can pass with workaround now.  I need one more fix for it. >>           In my previous patch https://review.opendev.org/#/c/716524 >> (merged), I dynamically override below >>              config_override: | >>                  [mysqld] >>                  bind_address=:: >>           However, it did not work now. 
From log,  it shows error >> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >> line: 1'" >>           I tried many methods, but could not remove the first line >> in 20-override.cnf >>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >> 20-override.cnf >>                  |- >>                  [mysqld] >>                  bind_address=:: >>          I can only add it in manifest.yaml as a static override like >> below. >>                 values: >>                    conf: >>                        database: >>                            config_override: | >>                                [mysqld] >>                                bind_address=:: >>                   b) Reset of controllers and check status of >> OpenStack while a controller is rebooting. >>           I have tested it and pass on simplex. >>           For duplex, I have a setup issue in my side. >>           @Jascanu, Nicolae  Could you help me on it for duplex test, >> if you have time today. Thanks! >> >> Zhipeng >> >> >> >> -----Original Message----- >> From: Miller, Frank >> Sent: 2020年5月26日 21:13 >> To: Liu, ZhipengS ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Zhipeng: >> >> Can you publish the list of tests that have been run for openstack? >> >> Also has openstack been tested for the following scenarios: >> 1) Using IPv6 >> 2) Reset of controllers and check status of openstack while a >> controller is rebooting? >> >> Frank >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: Monday, May 25, 2020 3:14 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi all, >> >> We have passed all sanity test on all setup. Thanks Nicolae!! >> We also built out OpenStack service images from layered build >> environment. >> >> Please help to review and push below patches to be merged, thanks! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >> >> BRs >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月14日 16:49 >> To: 'Saul Wold' ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Hi all, >> >> Call for patch review again! >> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >> >> Thanks! >> Zhipeng >> >> -----Original Message----- >> From: Liu, ZhipengS >> Sent: 2020年5月9日 8:38 >> To: Saul Wold ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> Agree! >> >> -----Original Message----- >> From: Saul Wold >> Sent: 2020年5月9日 0:29 >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >> for patch review!! >> >> I would strengthen that to no changes until we get Green Sanity other >> than what's required to make them Green. >> >> Full Stop! >> >> Sau! >> >> >> On 5/8/20 9:05 AM, Miller, Frank wrote: >>> Until we can get sanity passing for several days in a row I strongly >>> suggest we do not allow any further changes into the load related to >>> OpenStack.  Folks can continue with reviews but let’s hold off >>> allowing merges related to a new OpenStack version. 
>>> >>> Frank >>> >>> *From:*Liu, ZhipengS >>> *Sent:* Friday, May 08, 2020 11:59 AM >>> *To:* starlingx-discuss >>> *Cc:* YU CHENGDE ; Penney, Don >>> >>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi all, >>> >>> Please help to review OpenStack Ussuri upgrade patches. >>> >>> Our target is to get all below patches merged by end of next week. >>> >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>> :merged) >>> >>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>> python3.6 for OpenStack services as ussuri release only support >>> python3. >>> >>> We also rebased openstack-helm/helm-infra to latest version. >>> >>> Engineering build test status. >>> >>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. >>> >>> Thanks! >>> >>> Zhipeng >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Jun 10 13:37:46 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 10 Jun 2020 06:37:46 -0700 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: <1f303d98-f809-e58c-7eb8-17d9ecb3bd69@windriver.com> References: <1f303d98-f809-e58c-7eb8-17d9ecb3bd69@windriver.com> Message-ID: <3704c353-2bf6-13a6-7311-2435cba8aaeb@linux.intel.com> To save an email the answer is yes to the previous email about the ask, would like a build with those nine patches applied. On 6/10/20 6:23 AM, Scott Little wrote: > I guest the question is where to publish the docker images. github, but > with a 'ussuri' element added to the tag ? 
> That would be great, if they can then be pulled properly, I guess, that might require an additional change somehow. I am not a helm expert on that. Sau! > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build issues >> and could give the Test team a sanity spin with a Ussuri/Cengn build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, >>> there should be no more reason to block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we need >>> to merge them together.) >>> Upgrade openstack-helm-infra zhipeng liu >>> starlingx/openstack-armada-app       workflow-1 >>> Add mariadb database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app >>> Fix render error in cinder during openstack-helm rebase zhipeng >>> liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app >>> Update manifest.yaml file for openstack-helm upgrade. zhipeng liu >>> starlingx/openstack-armada-app >>> Upgrade openstack-helm zhipeng liu    starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app >>> Modify build-tools and stable-wheels for Ussuri upgrading    YU >>> CHENGDE    starlingx/root >>> Upgrade openstack docker images for stable/ussuri        YU >>> CHENGDE    starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we can >>> build out base image and OpenStack service images successfully with >>> below command. >>> =============================================================================== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build >>> build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. 
>>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >>> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >>> state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 openstack-helm-infra >>> patches will fix the mariadb recovery issues.  These 2 patches were >>> merged in the openstack-helm-infra project in January and February of >>> 2020.   What would be good to know is what broke mariadb recovery >>> between April of 2019 when Chris Friesen finished up his story [1] >>> and our current loads today.  The most likely explanation is the >>> upversion of Train or the upversion to openstack-helm-infra done in >>> November 2019 introduced the mariadb recovery issues.  And then the >>> openstack-helm folks found and fixed the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just the >>> openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and this >>> branch has these openstack-helm commits, I support letting Ussuri >>> merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during full >>> Galera restart https://review.opendev.org/#/c/708071/ mariadb: avoid >>> state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! 
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else who >>> performs testing can look to determine when this test was done as >>> part of feature testing after train was delivered as it should have >>> been tested as part of stx.3.0 as well.  I do not know when this >>> started to break.  One topic we will discuss at the PTG tomorrow will >>> be how to improve our test coverage and automation so this type of >>> issue can be found immediately as new code is being delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked.  If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk? >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement?  If not , have >>> we pass this case before, which build? 
>>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time.  This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve a >>> little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends on >>> ovs-db. ) Any comment? 
>>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! 
>>> For OpenStack upgrade task, we have finished all test and get patches >>> ready for more than 2 weeks, but no any review comments and feedback >>> from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see any >>> LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. 
>>> AIO - Simplex >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             49 TCs [PASS] >>> Sanity Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ================================================================================================================================= >>> >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. 
[OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance cirros-image-with-volumes-1 >>> ................ [OKAY] >>> ============================================================================================================================================= >>> >>> ===== Test Iteration 0 (single-execution) >>> =================================================================================================== >>> >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. 
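For readers following along: the "dynamic override" referred to above is normally staged through the stx-openstack helm override mechanism rather than by editing the chart itself. A minimal sketch of that flow is shown below; it assumes the application is named stx-openstack, the chart is mariadb and the namespace is openstack, and the exact helm-override-update argument order should be checked against the system CLI help, so treat this as an illustration only.

  # Hypothetical values file; the structure mirrors the conf.database.config_override
  # block quoted further down in this message.
  cat > mariadb-ipv6.yaml <<EOF
  conf:
    database:
      config_override: |
        [mysqld]
        bind_address=::
  EOF

  # Stage the user override for the mariadb chart of the stx-openstack application,
  # then re-apply the application so the override is rendered into the chart.
  system helm-override-update --values mariadb-ipv6.yaml stx-openstack mariadb openstack
  system application-apply stx-openstack
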
From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override like >>> below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex test, >>> if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>> for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity other >>> than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I strongly >>>> suggest we do not allow any further changes into the load related to >>>> OpenStack.  
Folks can continue with reviews but let’s hold off >>>> allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ian.Jolliffe at windriver.com Wed Jun 10 14:26:11 2020 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Wed, 10 Jun 2020 14:26:11 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG Message-ID: <98161FF9-DB73-4E3F-A4C8-D15D35A1E0A6@windriver.com> Hello Folks, I was going through the PTG discussions and
etherpads and came across the topic of community and users. Although I’m a new-ish member of the community, I’d like to highlight some things we can also look at: Discussion Forums (Discourse, GitHub Discussions): We are using mailing lists for all discussions today. Most cloud-native projects are using Discourse forums (e.g. Kubernetes, Docker, LXC, LXD, LXCFS, etc. – virtually everyone in this space is part of a Discourse community. I want to double-stress this point actually). IJ >> I agree that Discourse is something we should look at – if you join the community call I am sure you would get some feedback. But perhaps it doesn’t work for your timezone. GitHub recently announced the Beta of Discussions. If STX is looking to build a community there, Discussions might be a nice, low-cost place to host the community. Besides this, many communities have Slack and Discord teams. But forums are infinitely more discoverable (if we’re not talking about ad-hoc discussions). Participation in other communities + Adoption Stories: We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. Fin’: In my personal experience: I’ve been using and talking about STX for the last 6 months. It is strange that for talking about STX internally, we’re using tools like MS Teams and Slack or Yammer/Discourse/PlanetBlue within our respective companies but the community has a 2nd class experience. In my opinion mailing lists and IRC are not the most modern way of managing large communities for modern, cloud-native projects. I’m sorry if this was already discussed some time ago and this is a repetition (Discourse has cool features to resolve these sorts of discussions btw. 😉) Best Taimoor Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jun 10 14:56:21 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 10 Jun 2020 14:56:21 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (June 10, 2020) In-Reply-To: References: Message-ID: >From today's call... * Standing Topics * Sanity * Green yesterday * issues earlier due to some Ussuri changes related to Python 3 and building container images - Yong & co. working with Scott on options to test this scenario * Gerrit Reviews in Need of Attention * https://review.opendev.org/#/q/topic:for_ussuri+(status:open) - reviews for OpenStack Ussuri Upgrade * https://review.opendev.org/#/c/731652 - fix for a HIGH LP. 
* https://review.opendev.org/#/c/728322/ - logmgmt upgrade to python3 * Topics for this Week * follow ups from vPTG * stx.4.0 MS-3 status * Scott's update on "a new way to test you package's dependencies" * http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/008828.html * Frank will update the guidance on things to check before committing to cite this tool * ARs from Previous Meetings * 5/27 * Build Team mimic what happens in the CENGN build locally * 6/10: in progress * Build Team look into issue with co-dependent commits * 6/10: seems like this is a real issue - workaround is to manually make sure that all co-dependent commits go in together * 5/20 * Saul/Scott review 0514 build break, update learnings/recommendations as appropriate * Scott work on how to make sure there's an ISO whether or not there's a change in the flock layer * 6/10: on to do list * Saul/Ian discuss presenting about StarlingX on one of the TIP open networking group meetings * 6/10: Brent did this! the TIP guys will kick the tires, haven't heard back from them yet * 4/15 * manually updating version info (Build team + Bart) * build team has a plan, see Apr 16 minutes at https://etherpad.opendev.org/p/stx-build * 6/10: this is in progress * follow up with OpenDev re: VM for running SX sanity pending QCOW2 image (Bill, Build) * added an item about QCOW2 image to the build team agenda * 6/10: this in progress * Open Requests for Help * Subcloud on a Virtual Machine (Alfredo Deluca): http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/008827.html * Bart will respond to Alfredo * ERROR when deploy stx-monitor (Rahmat Agung): http://lists.starlingx.io/pipermail/starlingx-discuss/2020-June/008824.html * Matt will respond to Rahmat From: Zvonar, Bill Sent: Tuesday, June 9, 2020 1:55 PM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (June 10, 2020) Hi all, reminder of tomorrow's TSC/Community call. Please feel free to add items to the agenda [0] for the Community call beforehand. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200610T1400 From Barton.Wensley at windriver.com Wed Jun 10 15:09:48 2020 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 10 Jun 2020 15:09:48 +0000 Subject: [Starlingx-discuss] Subcloud on a Virtual Machine In-Reply-To: References: Message-ID: Alfredo, We support installing StarlingX in VMs using either KVM or VirtualBox – see the instructions at https://docs.starlingx.io/deploy_install_guides/index.html. We don’t have instructions for installing StarlingX in OpenStack VMs. To do this you would likely want to generate a qcow2 image (using KVM or VirtualBox). I can’t help you with this and based on the lack of response on the list I don’t think others have done this either. If you figure this out it would be great if you could share your findings with the community. Bart From: Alfredo De Luca Sent: June 8, 2020 6:00 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Subcloud on a Virtual Machine Hi all. Any thoughts on this? Also has anyone ever tried this solution with StarlingX on Virtual Machine at all? Cheers On Wed, Jun 3, 2020 at 9:05 PM Alfredo De Luca > wrote: Hi all. 
For testing purposes we are trying to install a subcloud on a VM (Openstack to be precise) but we get a couple of errors as below. Booting from an ISO (STX 3.0) we get this 1. ERROR: Specified installation (sda) or boot (sda) device is invalid. then I supposed the ISO is looking for a device sda .. so we fixed that but then another issue occurred and the error now is 2. Disk "" given in clearpart command does not exist. Now I wonder if it is possible to install that on top of a VM and also what could it the fix for the second error. Any idea/clue? Cheers -- /Alfredo -- /Alfredo -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Jun 10 15:24:24 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 10 Jun 2020 08:24:24 -0700 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> Message-ID: On 6/10/20 12:32 AM, Imtiaz, Taimoor wrote: > Hello Folks, > > I was going through the PTG discussions and etherpads and came across > the topic of community and users. Although I’m a new-ish member of the > community, I’d like to highlight some things we can also look at: > > *Discussion Forums (Discourse, GitHub Discussions)*: > > We are using mailing lists for all discussions today. Most cloud-native > projects are using Discourse forums (e.g. Kubernetes > , Docker , > LXC, LXD, LXCFS , etc. – virtually > everyone in this space is part of a Discourse community. I want to > double-stress this point actually). > Your welcome to participate in those forums and report back if there are issues, but I don't think we want to maintain 2 communication channels, as I believe Discourse is both a forum and mailing list combined. I know this has come up in the past, StarlingX as part of the OpenStack Foundation chose to use IRC, as you point out below, IRC has been around for a long time and it's used by many, many Open Source project beyond just OpenStack. Please come and participate in the community call [0] on Wednesday mornings. Thanks for your input. Sau! [0] https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call > GitHub recently announced the Beta of Discussions > . > If STX is looking to build a community there, Discussions might be a > nice, low-cost place to host the community. > > Besides this, many communities have Slack and Discord teams. But forums > are infinitely more discoverable (if we’re not talking about ad-hoc > discussions). > > *Participation in other communities + Adoption Stories:* > > We need to be heavily present (announce CVEs, project updates etc.) in > the Kubernetes discussion forums and Slack. > > CNCF has regular posts from adopters of Kubernetes. If we have users who > have adopted STX for their edge, we should invite their architect to > promote their company’s blogpost on CNCF’s blog. I think it’s great > promotion for the user’s product and for the STX community. > > *Fin’:* > > In my personal experience: I’ve been using and talking about STX for the > last 6 months. It is strange that for talking about STX internally, > we’re using tools like MS Teams and Slack or Yammer/Discourse/PlanetBlue > within our respective companies but the community has a 2^nd class > experience. > > In my opinion mailing lists and IRC are not the most modern way of > managing large communities for modern, cloud-native projects. 
I’m sorry > if this was already discussed some time ago and this is a repetition > (Discourse has cool features to resolve these sorts of discussions btw. 😉) > > Best > > Taimoor > > Intel Deutschland GmbH > Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany > Tel: +49 89 99 8853-0, www.intel.de > Managing Directors: Christin Eisenschmid, Gary Kershaw > Chairperson of the Supervisory Board: Nicole Lau > Registered Office: Munich > Commercial Register: Amtsgericht Muenchen HRB 186928 > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From fungi at yuggoth.org Wed Jun 10 16:01:44 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 16:01:44 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> Message-ID: <20200610160144.sgdw7evnrvfw6jna@yuggoth.org> On 2020-06-10 08:24:24 -0700 (-0700), Saul Wold wrote: [...] > Your welcome to participate in those forums and report back if > there are issues, but I don't think we want to maintain 2 > communication channels, as I believe Discourse is both a forum and > mailing list combined. [...] Having struggled repeatedly to interact with Discourse via E-mail, I can say that it's not really a mailing list. It has some features to feed you posts via E-mail and accept replies, but that is where the similarity to a traditional listserv ends. I've been exploring upgrading our Mailman servers the newer 3.x series which enables a lot of Web forum like workflows (via Hyperkitty), but can also say that it turns mailing lists into Web forums about as well as Discourse turns Web forums into mailing lists (that is to say, probably not sufficiently for folks who are seeking a real "Web forum experience"). Personally, I miss Usenet, and wish I had sufficient time to work on adding an NNTP connector for our lists. But the long as short of it is that communities use different tools to communicate, and as someone who participates in lots of diverse communities I've had to learn to do so with a wide variety of tools. Choice of communication tooling is not what makes or breaks a community, and spending too much time jumping back and forth between popular communication platforms of the day serves mostly to eat effort which could otherwise be spent improving software the community is there to produce and maintain. That the Linux kernel developers continue to use mailing lists for discussion, and even for sharing and reviewing Git commits, has not resulted in the death of their community (quite the contrary). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ildiko.vancsa at gmail.com Wed Jun 10 16:18:48 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 10 Jun 2020 18:18:48 +0200 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> Message-ID: <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> Hi Taimoor, I agree with the previous responses regarding the communication tool comments and would reflect on the blog and information sharing topic here. 
[…] > Participation in other communities + Adoption Stories: > We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. > > CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. […] You may not be aware, but on the StarlingX website we have a blog section where we are actively looking for new content: https://www.starlingx.io/blog/ If you or anyone else has an adoption story, demo, or any other cool topic to share details about please share it on the community’s blog. You can add pointers to these blog posts from anywhere including the CNCF sites which helps with further increasing visibility of the project and get new content in front of those who are monitoring the blog for new stories. Anyone can suggest a new post on GitHub in the form of a pull request: https://github.com/StarlingXWeb/starlingx-website/tree/master/src/pages/blog If you need help with putting your blog post together please reach out to me and I’m happy to help reviewing and polishing the text or upload it to GitHub if you have issues with that. Thanks, Ildikó From chris.friesen at windriver.com Wed Jun 10 17:05:30 2020 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 10 Jun 2020 11:05:30 -0600 Subject: [Starlingx-discuss] new docker image referenced by starlingx Message-ID: Hi all, Just a heads-up that with https://review.opendev.org/#/c/731831 merged the initial ansible playbook will try to pull the starlingx/n3000-opae:stx.4.0-v1.0.0 Docker image as listed at https://hub.docker.com/r/starlingx/n3000-opae/tags Anyone using a manually-managed Docker image registry will need to add this image. Thanks, Chris From Matt.Peters at windriver.com Wed Jun 10 17:36:13 2020 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 10 Jun 2020 17:36:13 +0000 Subject: [Starlingx-discuss] ERROR when deploy stx-monitor. In-Reply-To: References: Message-ID: <7172B806-C4DE-4992-AD29-DDC34F76E295@windriver.com> Hi Rahmat, The stx-monitor Armada application is not being actively maintained within since there wasn’t much interest from the community in continuing to support it. The individual container services can still be deployed using Helm on StarlingX if you require. There are also several other projects within the CNCF landscape for monitoring that can also be considered. https://landscape.cncf.io/category=observability-and-analysis&format=card-mode&grouping=category I hope that answers your question. Regards, Matt From: Rahmat Agung Date: Sunday, June 7, 2020 at 10:34 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] ERROR when deploy stx-monitor. 
I try to deploy stx-monitor on 3 nworker nodes with label like this: ``` worker-3 Ready 2d18h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-client=enabled,elastic-controller=enabled,elastic-data=enabled,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-3,kubernetes.io/os=linux worker-4 Ready 2d18h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-client=enabled,elastic-controller=enabled,elastic-data=enabled,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-4,kubernetes.io/os=linux worker-5 Ready 2d16h v1.16.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,elastic-master=enabled,kubernetes.io/arch=amd64,kubernetes.io/hostname=worker-5,kubernetes.io/os=linux ``` When I check logs: ``` us: <_Rendezvous of RPC that terminated with: status = StatusCode.UNKNOWN details = "release mon-kibana failed: timed out waiting for the condition" debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" > 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller Traceback (most recent call last): 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 473, in install_release 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller metadata=self.metadata) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, None) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller status = StatusCode.UNKNOWN 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller details = "release mon-kibana failed: timed out waiting for the condition" 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller > 2020-06-07 14:07:21.196 7963 ERROR armada.handlers.tiller 2020-06-07 14:07:21.199 7963 DEBUG armada.handlers.tiller [-] [chart=kibana]: Helm getting release status for release=mon-kibana, version=0 get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:539 2020-06-07 14:07:21.402 7963 DEBUG armada.handlers.tiller [-] [chart=kibana]: GetReleaseStatus= name: "mon-kibana" info { status { code: FAILED } first_deployed { seconds: 1591538240 nanos: 977775758 } last_deployed { seconds: 1591538240 nanos: 977775758 } Description: "Release \"mon-kibana\" failed: timed out waiting for the condition" } namespace: "monitor" get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:547 2020-06-07 
14:07:21.404 7963 ERROR armada.handlers.armada [-] Chart deploy [kibana] failed: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: mon-kibana - Tiller Message: b'Release "mon-kibana" failed: timed out waiting for the condition' 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada Traceback (most recent call last): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 473, in install_release 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada metadata=self.metadata) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada return _end_unary_response_blocking(state, call, False, None) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada raise _Rendezvous(state, None, None, deadline) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada status = StatusCode.UNKNOWN 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada details = "release mon-kibana failed: timed out waiting for the condition" 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada debug_error_string = "{"created":"@1591538841.195787781","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release mon-kibana failed: timed out waiting for the condition","grpc_status":2}" 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada > 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada During handling of the above exception, another exception occurred: 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada Traceback (most recent call last): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 225, in handle_result 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada result = get_result() 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 236, in 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada if (handle_result(chart, lambda: deploy_chart(chart))): 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 214, in deploy_chart 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada chart, cg_test_all_charts, prefix, known_releases) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 239, in execute 2020-06-07 14:07[402248.574350] serial8250: too much work for irq4 :21.404 7963 ERROR armada.handlers.armada timeout=timer) 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 486, in install_release 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada raise ex.ReleaseException(release, status, 'Install') 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 
armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: mon-kibana - Tiller Message: b'Release "mon-kibana" failed: timed out waiting for the condition' 2020-06-07 14:07:21.404 7963 ERROR armada.handlers.armada 2020-06-07 14:07:21.406 7963 ERROR armada.handlers.armada [-] Chart deploy(s) failed: ['kibana'] 2020-06-07 14:07:21.478 7963 INFO armada.handlers.lock [-] Releasing lock 2020-06-07 14:07:21.486 7963 ERROR armada.cli [-] Caught internal exception: armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kibana'] 2020-06-07 14:07:21.486 7963 ERROR armada.cli Traceback (most recent call last): 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke 2020-06-07 14:07:21.486 7963 ERROR armada.cli self.invoke() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 213, in invoke 2020-06-07 14:07:21.486 7963 ERROR armada.cli resp = self.handle(documents, tiller) 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 81, in func_wrapper 2020-06-07 14:07:21.486 7963 ERROR armada.cli return future.result() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result 2020-06-07 14:07:21.486 7963 ERROR armada.cli return self.__get_result() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result 2020-06-07 14:07:21.486 7963 ERROR armada.cli raise self._exception 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run 2020-06-07 14:07:21.486 7963 ERROR armada.cli result = self.fn(*self.args, **self.kwargs) 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 256, in handle 2020-06-07 14:07:21.486 7963 ERROR armada.cli return armada.sync() 2020-06-07 14:07:21.486 7963 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 252, in sync 2020-06-07 14:07:21.486 7963 ERROR armada.cli raise armada_exceptions.ChartDeployException(failures) 2020-06-07 14:07:21.486 7963 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kibana'] 2020-06-07 14:07:21.486 7963 ERROR armada.cli ``` What mean the error above? I just want to know, is stx-monitor stable or still experimental? Because I could not found documentation about it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Wed Jun 10 18:33:54 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 10 Jun 2020 18:33:54 +0000 Subject: [Starlingx-discuss] Weekly Build meeting on Friday at 15:00 UTC Message-ID: For this week only the StarlingX Build meeting is moving to Friday morning: 15:00 UTC 11:00 EST 08:00 PT Etherpad: https://etherpad.openstack.org/p/stx-build Zoom bridge: https://zoom.us/j/342730236 Frank Build PL -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From taimoor.imtiaz at intel.com Wed Jun 10 18:50:05 2020 From: taimoor.imtiaz at intel.com (Imtiaz, Taimoor) Date: Wed, 10 Jun 2020 18:50:05 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> Message-ID: <9b55496eddfb47178a8023ba59e6ccd5@intel.com> Hi Ildiko, Saul, Sure, I do not disagree that mailing lists are functional. Discourse is so much more welcoming and information is easy to discover. If you monitor a community forum such as Kubernetes', you'll see people having fun too (showing off projects etc.). It's also a bit more realtime and meant for threaded discussions. It was just a suggestion on my end. I do not think we should compare cloud native communities with Linux. The stewards are different generations of folks and mindset are totally different. In my observation, most people do not go through the hassle of registering on mailing lists. They do however like browsing forums (I know I do).. SEO tooling also likes it 😊 I agree with the blog post idea and I'll try to get some users to write those. These things came to mind after listening to the 2nd day's PTG recordings where there was a discussion around community adoption. Best, Taimoor -----Original Message----- From: Ildiko Vancsa Sent: Wednesday, June 10, 2020 18:19 To: Imtiaz, Taimoor Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Adoption discussions in PTG Hi Taimoor, I agree with the previous responses regarding the communication tool comments and would reflect on the blog and information sharing topic here. […] > Participation in other communities + Adoption Stories: > We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. > > CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. […] You may not be aware, but on the StarlingX website we have a blog section where we are actively looking for new content: https://www.starlingx.io/blog/ If you or anyone else has an adoption story, demo, or any other cool topic to share details about please share it on the community’s blog. You can add pointers to these blog posts from anywhere including the CNCF sites which helps with further increasing visibility of the project and get new content in front of those who are monitoring the blog for new stories. Anyone can suggest a new post on GitHub in the form of a pull request: https://github.com/StarlingXWeb/starlingx-website/tree/master/src/pages/blog If you need help with putting your blog post together please reach out to me and I’m happy to help reviewing and polishing the text or upload it to GitHub if you have issues with that. 
Thanks, Ildikó Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 From bruce.e.jones at intel.com Wed Jun 10 20:05:22 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 10 Jun 2020 20:05:22 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <9b55496eddfb47178a8023ba59e6ccd5@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> <9b55496eddfb47178a8023ba59e6ccd5@intel.com> Message-ID: Taimoor, thank you for sharing your thoughts on these topics. Earlier in the thread you said that we should be participating actively in CNCF communication forums/etc.. - posting news, questions, etc.. I absolutely agree with that, but don't have much time myself to do so. Perhaps someone in the community could volunteer to (or may already) represent the project in those places? brucej -----Original Message----- From: Imtiaz, Taimoor Sent: Wednesday, June 10, 2020 11:50 AM To: Ildiko Vancsa Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Adoption discussions in PTG Hi Ildiko, Saul, Sure, I do not disagree that mailing lists are functional. Discourse is so much more welcoming and information is easy to discover. If you monitor a community forum such as Kubernetes', you'll see people having fun too (showing off projects etc.). It's also a bit more realtime and meant for threaded discussions. It was just a suggestion on my end. I do not think we should compare cloud native communities with Linux. The stewards are different generations of folks and mindset are totally different. In my observation, most people do not go through the hassle of registering on mailing lists. They do however like browsing forums (I know I do).. SEO tooling also likes it 😊 I agree with the blog post idea and I'll try to get some users to write those. These things came to mind after listening to the 2nd day's PTG recordings where there was a discussion around community adoption. Best, Taimoor -----Original Message----- From: Ildiko Vancsa Sent: Wednesday, June 10, 2020 18:19 To: Imtiaz, Taimoor Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Adoption discussions in PTG Hi Taimoor, I agree with the previous responses regarding the communication tool comments and would reflect on the blog and information sharing topic here. […] > Participation in other communities + Adoption Stories: > We need to be heavily present (announce CVEs, project updates etc.) in the Kubernetes discussion forums and Slack. > > CNCF has regular posts from adopters of Kubernetes. If we have users who have adopted STX for their edge, we should invite their architect to promote their company’s blogpost on CNCF’s blog. I think it’s great promotion for the user’s product and for the STX community. […] You may not be aware, but on the StarlingX website we have a blog section where we are actively looking for new content: https://www.starlingx.io/blog/ If you or anyone else has an adoption story, demo, or any other cool topic to share details about please share it on the community’s blog. 
You can add pointers to these blog posts from anywhere including the CNCF sites which helps with further increasing visibility of the project and get new content in front of those who are monitoring the blog for new stories. Anyone can suggest a new post on GitHub in the form of a pull request: https://github.com/StarlingXWeb/starlingx-website/tree/master/src/pages/blog If you need help with putting your blog post together please reach out to me and I’m happy to help reviewing and polishing the text or upload it to GitHub if you have issues with that. Thanks, Ildikó Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From fungi at yuggoth.org Wed Jun 10 20:14:41 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 10 Jun 2020 20:14:41 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <9b55496eddfb47178a8023ba59e6ccd5@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> <9b55496eddfb47178a8023ba59e6ccd5@intel.com> Message-ID: <20200610201440.t2c334s43qtwpsqk@yuggoth.org> On 2020-06-10 18:50:05 +0000 (+0000), Imtiaz, Taimoor wrote: > Sure, I do not disagree that mailing lists are functional. > Discourse is so much more welcoming and information is easy to > discover. "Welcoming" and "easy to discover" are matters of personal taste, and so differ widely based on individual experience. De gustibus non est disputandum. > If you monitor a community forum such as Kubernetes', you'll see > people having fun too (showing off projects etc.). It's also a bit > more realtime and meant for threaded discussions. E-mail and thus mailing lists are also explicitly designed for threaded discussions, unless you've decided to cripple your communications by using a terrible mail client. My client shows me thread trees of list messages just fine. > It was just a suggestion on my end. I do not think we should > compare cloud native communities with Linux. The stewards are > different generations of folks and mindset are totally different. I hesitate to ascribe ageist generalizations to communication tooling preferences. Are you suggesting that the Linux kernel doesn't have younger developers? Or that Kubernetes doesn't have older developers? What is specific to the Linux maintainer "mindset" which differentiates it from the Kubernetes maintainer "mindset" in this regard? > In my observation, most people do not go through the hassle of > registering on mailing lists. They do however like browsing forums > (I know I do).. I have no problem subscribing to mailing lists, in fact I'm subscribed to many. I much prefer getting messages in my inbox and not having to go check a dozen different Web sites to read new forum posts for discussions in which I'm involved/interested. To be honest, I'd rather not start up a Web browser at all when I can help it. > SEO tooling also likes it 😊 [...] Have any details on this? Popular Web search engines already crawl and index our list archives, and turn up relevant results from them. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From nicolae.jascanu at intel.com Wed Jun 10 20:21:55 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Wed, 10 Jun 2020 20:21:55 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200610T020226Z Message-ID: Sanity Test from 2020-June-10 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200610T020226Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200610T020226Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Jun 10 20:28:07 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 10 Jun 2020 16:28:07 -0400 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  
I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, >>> there should be no more reason to block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) >>> Upgrade openstack-helm-infra zhipeng liu >>> starlingx/openstack-armada-app       workflow-1 >>> Add mariadb database config override to support ipv6 zhipeng liu    >>> starlingx/openstack-armada-app >>> Fix render error in cinder during openstack-helm rebase zhipeng >>> liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app >>> Update manifest.yaml file for openstack-helm upgrade.                >>> zhipeng liu starlingx/openstack-armada-app >>> Upgrade openstack-helm zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack                      >>> YU CHENGDE starlingx/openstack-armada-app >>> Modify build-tools and stable-wheels for Ussuri upgrading YU >>> CHENGDE    starlingx/root >>> Upgrade openstack docker images for stable/ussuri        YU >>> CHENGDE    starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> =============================================================================== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build >>> build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. 
>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues.  >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. 
>>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability >>> risk?  Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! 
>>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? 
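One way to capture the recovery timeline broken down above is to poll the openstack namespace after the forced reboot and pull the events of whichever pods lag behind; a rough sketch (pod name suffixes differ per deployment, the two below are the ones called out elsewhere in this thread):

    # Poll until nothing is left outside Running/Completed
    watch -n 10 'kubectl get pods -n openstack | grep -vE "Running|Completed"'

    # Recent events for the stragglers (mount failures, liveness/readiness probes)
    kubectl describe pod -n openstack openvswitch-db-8fxkw | tail -n 30
    kubectl describe pod -n openstack neutron-ovs-agent-controller-0-937646f6-xxznw | tail -n 30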
>>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. >>>    Warning  FailedMount  2m19s              kubelet, controller-1  >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1  >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1  >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1  >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1  >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1  >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! 
>>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. >>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. 
>>> AIO - Simplex >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             49 TCs [PASS] >>> Sanity Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) >>> Setup                                    04 TCs [PASS] >>> Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] >>> Sanity Platform                 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ================================================================================================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. 
[OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1  >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ============================================================================================================================================= >>> ===== Test Iteration 0 (single-execution) >>> =================================================================================================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. 
From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I strongly >>>> suggest we do not allow any further changes into the load related to >>>> OpenStack.  
Folks can continue with reviews but let’s hold off >>>> allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Jun 10 21:51:15 2020 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 10 Jun 2020 14:51:15 -0700 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! 
In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: <68b629aa-9809-2462-e50e-6b2a5e9c71aa@linux.intel.com> Scott, Can you provide details of what you need and what we need to do on Jenkins for Davlet and me to work on it tomorrow when you're offline? I am guessing you need a set of merged branches someplace that we can point Jenkins at, or does that need to be on Cengn or someplace else? Thanks Sau! On 6/10/20 1:28 PM, Scott Little wrote: > Six of the nine updates are in a state of merge conflict. > > Please resolve the conflicts so that I can make progress with a CENGN build. > > Scott > > > > On 2020-06-10 9:20 a.m., Scott Little wrote: >> CENGN cycles aren't a problem.  People resources is a challenge. >> >> So the ask is for a manual build, on CENGN, adding in the nine patches >> listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). >> >> .. and the addition of two repos to the build-stx-base.sh step >> >> build-stx-base.sh >>    --repo local-stx-build,... \ >>    --repo stx-distro,... \ >>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >> >> >> Is that correct? >> >> Scott >> >> >> On 2020-06-09 9:04 a.m., Saul Wold wrote: >>> >>> Frank, Scott, Davelet: >>> >>> Are there cycles available on Cengn (and people resources) to do a >>> Cengn build with the Ussuri patch set applied?  I know this is >>> different than a branch build.  I think we have done this kind of >>> thing in the past. >>> >>> This might help to make sure we don't have any more Cengn build >>> issues and could give the Test team a sanity spin with a Ussuri/Cengn >>> build. >>> >>> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >>> email. >>> >>> Thanks >>>   Sau! >>> >>> >>> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>>> Hi all, >>>> >>>> So far, all block issues and concerns have been addressed. >>>> Since we have passed all sanity test, and Ussuri OpenStack has been >>>> officially released last month, >>>> there should be no more reason to block these patches merge. >>>> >>>> Next step: >>>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>>> merged. We need great help from core guys! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> >>>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>>> patch with workflow-1 and add depends-on for other patches as we >>>> need to merge them together.) >>>> Upgrade openstack-helm-infra zhipeng liu >>>> starlingx/openstack-armada-app       workflow-1 >>>> Add mariadb database config override to support ipv6 zhipeng liu >>>> starlingx/openstack-armada-app >>>> Fix render error in cinder during openstack-helm rebase zhipeng >>>> liu    starlingx/openstack-armada-app >>>> Update download list for openstack-helm upgrade zhipeng liu >>>> starlingx/openstack-armada-app >>>> Update manifest.yaml file for openstack-helm upgrade. zhipeng liu >>>> starlingx/openstack-armada-app >>>> Upgrade openstack-helm zhipeng liu starlingx/openstack-armada-app >>>> >>>> # Below 3 patches is for OpenStack upgrade.
>>>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>>> starlingx/openstack-armada-app >>>> Modify build-tools and stable-wheels for Ussuri upgrading YU >>>> CHENGDE    starlingx/root >>>> Upgrade openstack docker images for stable/ussuri        YU >>>> CHENGDE    starlingx/upstream >>>> >>>> >>>> After removing required python3 dependent packages from local, we >>>> can build out base image and OpenStack service images successfully >>>> with below command. >>>> =============================================================================== >>>> >>>> @Scott, please help to update cengn build script with below 2 >>>> additional repos and help to trigger image build >>>> build-stx-base.sh >>>>    --repo local-stx-build,... \ >>>>    --repo stx-distro,... \ >>>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >>>>    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>>> >>>> Thanks a lot! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月8日 16:54 >>>> To: 'Miller, Frank' ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> It is not easy to figure out whether/how/when OpenStack-helm-info >>>> upstream introduce this issue and then fix it. >>>> I also could not find any fix in LP[1], which just mentioned that >>>> this intermittent issue not hit us after some changes in related field. >>>> >>>> Anyhow, below 2 patches should fix potential bug and I could not see >>>> the same error log again in our ussuri upgrade EB. >>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>>> avoid state management thread death >>>> >>>> Since we have passed fully test, we'd better push to merge ussuri >>>> upgrade/openstack-helm rebasing patches soon. >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>>> >>>> Thanks! >>>> Zhipeng >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月5日 22:32 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> This looks promising.  Your theory is that the 2 >>>> openstack-helm-infra patches will fix the mariadb recovery issues. >>>> These 2 patches were merged in the openstack-helm-infra project in >>>> January and February of 2020.   What would be good to know is what >>>> broke mariadb recovery between April of 2019 when Chris Friesen >>>> finished up his story [1] and our current loads today.  The most >>>> likely explanation is the upversion of Train or the upversion to >>>> openstack-helm-infra done in November 2019 introduced the mariadb >>>> recovery issues.  And then the openstack-helm folks found and fixed >>>> the issue earlier in 2020. >>>> >>>> If we had more time the preferred approach would be to merge just >>>> the openstack-helm-infra changes first to prove they address mariadb >>>> recovery and then in a separate commit merge Ussuri.  But since you >>>> have validated that mariadb recovers with your Ussuri branch and >>>> this branch has these openstack-helm commits, I support letting >>>> Ussuri merge into stx.4.0. 
>>>> >>>> Frank >>>> [1] https://storyboard.openstack.org/#!/story/2004712 >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Friday, June 05, 2020 2:36 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> As for OpenStack not recovering after both controllers are reset [1] >>>> I could not reproduce this issue with my Ussuri upgrade EB. >>>> My test step is: >>>> 1) ssh to standby controller and sudo reboot -f for it. >>>> 2) sudo reboot -f for activated controller All pods can resume after >>>> a while. >>>> >>>> However, I could reproduce this issue with DB 20200516T080009Z. >>>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>>> [2] early last year. >>>> >>>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>>> It includes below 2 patches which fixed this stability issue. >>>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>>> avoid state management thread death >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 22:35 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> This is not a new requirement.  Users expect the software to recover >>>> when resets occur. >>>> >>>> As I had mentioned at the PTG yesterday I know personally that this >>>> test passed in stx3.0 before the upversion to train. Someone else >>>> who performs testing can look to determine when this test was done >>>> as part of feature testing after train was delivered as it should >>>> have been tested as part of stx.3.0 as well.  I do not know when >>>> this started to break.  One topic we will discuss at the PTG >>>> tomorrow will be how to improve our test coverage and automation so >>>> this type of issue can be found immediately as new code is being >>>> delivered. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Wednesday, June 03, 2020 10:28 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Frank, >>>> >>>> Have we pass this case before?  Is it a new requirement? >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 22:12 >>>> To: Miller, Frank ; Liu, ZhipengS >>>> ; starlingx-discuss at lists.starlingx.io; >>>> Church, Robert >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Yong/Zhipeng - the LP for openstack not recovering after both >>>> controllers are reset is >>>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>>> >>>> Ovidiu is investigating and will provide any updates from his >>>> investigation.  Please continue to keep us informed of your >>>> investigation. 
>>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: Tuesday, June 02, 2020 10:38 PM >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> We used a build from May 28. >>>> >>>> As for the decoupling issue these are actively being worked. If you >>>> run the system helm-override-show command when the stx-openstack app >>>> is applied you won’t see the CLI command fail.  It only fails when >>>> you try a helm-override-show when the app is in uploaded state.  In >>>> any case this will be fixed shortly. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Tuesday, June 02, 2020 10:04 PM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Thanks for your quick update! >>>> Which build are you using to test this case? >>>> Since decoupling commits introduced several regressions (at least >>>> 2),  not propose to do this kind of stability test with latest build. >>>> BTW, do we have plan to revert them considering this stability >>>> risk?  Our Ussuri upgrade patches is waiting for it☹ >>>> >>>> Furthermore, we have not seen this test case that force reboot both >>>> controllers at the same time. Is it a new requirement? If not , have >>>> we pass this case before, which build? >>>> I'd like to help on it with the pass build for comparative analysis. >>>> From my point , mariadb might not work if we reboot both controllers. >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年6月3日 8:55 >>>> To: Miller, Frank ; Liu, ZhipengS >>>> ; starlingx-discuss at lists.starlingx.io; >>>> Church, Robert >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> An update on our testing and analysis today.  We are able to >>>> reproduce the issue with OpenStack not recovering when we trigger a >>>> reboot of both AIO controllers at the same time. This results in >>>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>>> openstack commands not working indefinitely after the controllers >>>> recover.  We'll create a launchpad tomorrow to track this issue. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: Tuesday, June 02, 2020 12:25 PM >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Thanks Zhipeng for the analysis.  What is challenging here is the >>>> multitude of issues. >>>> >>>> In our debug of openstack the past few days we are seeing the app >>>> fail completely.  After investigation this issue is a Day 1 >>>> containerd issue.  This is tracked in LP: >>>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>>> >>>> The issue you are seeing on a swact is a new and very recent issue >>>> tied to the decoupling commits that were merged late last week.  Bob >>>> is investigating and I expect he'll have a fix soon for that. 
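A quick way to see the state dependence of the override CLI that Frank describes above is to run the same query against stx-openstack in its applied and uploaded states; a sketch using the standard platform commands (output and fix status will vary by build):

    # Check the current application state first
    source /etc/platform/openrc
    system application-show stx-openstack | grep -i status

    # Reported to work while the app is applied and to fail while it is only uploaded
    system helm-override-show stx-openstack mariadb openstack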
>>>> >>>> But the issues we are most concerned with are when we see mariadb >>>> crashing and not able to recover or with openstack services not >>>> working for longer periods of time.  We're attempting to isolate the >>>> sequence of events that trigger this. >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Tuesday, June 02, 2020 11:47 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>>              Unable to unlock controller after swact and lock w/ >>>> openstack applied I also tested with daily build 20200516T080009Z. >>>> However, it could not be reproduced. >>>> We should  fix this regression ASAP! >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月2日 16:48 >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Church, Robert >>>> >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank and all, >>>> >>>> Update for issue 2. >>>> I raised a new LP to track it. >>>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>>> Below is the time statistics. It seems reasonable. No obvious issue >>>> found. >>>> 1) 3~4min for host restart and get ready. >>>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>>> configmap sync is ready) >>>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>>> a little, as it can retry quickly to connect ovs-vsctl: >>>> unix:/var/run/openvswitch/db.sock) >>>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>>> on ovs-db. ) Any comment? >>>> >>>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>>              Unable to unlock controller after swact and lock w/ >>>> openstack applied >>>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>>              system helm-override-show stx-openstack mariadb >>>> openstack crash  It seems related to openstack plugin decouple >>>> related patches. Should be a regression. >>>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>>> you pls help further check it and your patches, thanks! >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年6月1日 16:20 >>>> To: 'Miller, Frank' ; >>>> 'starlingx-discuss at lists.starlingx.io' >>>> ; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> I also tested the issue 2 with latest daily build on duplex setup. >>>> The conclusion is that the issue is there all the time. >>>> This issue might not be fixed soon, but should not block OpenStack >>>> upgrade, right? >>>> >>>> For 9 OpenStack patches below, I have removed all workflow-1, except >>>> the first patch and add depends-on all them. >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>>> Your review and comments are welcome! >>>> >>>> As for issue 2, some detail info FYI. >>>> It also needs to wait for around 10 min before all pods are ready >>>> again after reboot for master build. >>>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>>> my OpenStack upgrade engineering build. 
>>>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>>> openvswitch-db) >>>>       openvswitch-db-8fxkw >>>> Related key logs below. >>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>>> failed to sync secret cache: timed out waiting for the condition >>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>>> sync configmap cache: timed out waiting for the condition >>>>    Warning  FailedMount  105s               kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>>> failed to sync secret cache: timed out waiting for the condition >>>>    Warning  FailedMount  105s               kubelet, controller-1 >>>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>>> sync configmap cache: timed out waiting for the condition >>>>    Warning  Unhealthy    30s                kubelet, controller-1 >>>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>>> database connection failed (Permission denied) >>>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>>> Readiness probe failed: ovs-vsctl: >>>> unix:/var/run/openvswitch/db.sock: database connection failed >>>> (Permission denied) >>>> >>>> Is it the same stability issue as the one reported from your test >>>> team?  I can only see this issue after force rebooting. What is our >>>> expected recovery time? >>>> Your comment is appreciated! >>>> >>>> Thanks! >>>> Zhipeng >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月29日 9:42 >>>> To: 'Miller, Frank' ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Glad to see your quick reply!! >>>> For OpenStack upgrade task, we have finished all test and get >>>> patches ready for more than 2 weeks, but no any review comments and >>>> feedback from your side.  What's the next step? >>>> >>>> For issue # 2,  in community meeting notes,  I saw that you had some >>>> stability issue from WR local test team. But so far, I do not see >>>> any LP for the detail info. You should ask them to do that!  Right? >>>> >>>> According to your concern, I tried to reproduce it with my build >>>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>>> issue [1] was not seen any more, mariadb got ready quickly, no >>>> regression. >>>> >>>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月29日 1:07 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Thanks Zhipeng. >>>> >>>> Good to see progress on IPv6. >>>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>>> there a LP open on this issue?  Which pods are not ready? What can >>>> you tell us about this 10 minute outage? 
>>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Thursday, May 28, 2020 5:06 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> Nicolae already added test case description. Thanks Nicolae! >>>> >>>> I also did below test on AIO-DX virtual setup, exactly according to >>>> your mentioned steps. >>>> No issue found, but just need to wait for around 10 min before all >>>> pods are ready again after reboot. >>>> >>>> For ipv6 issue, I have submitted new patch for it since dynamic >>>> override for database config did not work. >>>>   https://review.opendev.org/#/c/731461/ >>>>   https://review.opendev.org/#/c/731470/ >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月27日 22:43 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> Thanks for the info.  You have provided the # of testcases but not >>>> what those testcase do.  Where can I find a description of what the >>>> OpenStack testcases do? >>>> >>>> For the controller reset testcases I'd like to see the test result >>>> for the following: >>>> Is openstack usable during the following scenarios on AIO-DX and on >>>> Standard configurations: >>>> - Lock/unlock of standby controller >>>> - reset (ie: reboot -f) of the standby controller >>>> - reset (ie: reboot -f) of the active controller >>>> - reapply of stx-openstack after the above scenarios >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Wednesday, May 27, 2020 9:15 AM >>>> To: Miller, Frank ; >>>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi Frank, >>>> >>>> We have done below tests. >>>> 1) Sanity tests by Nicolae. 
>>>> AIO - Simplex >>>> Setup                                    04 TCs [PASS] >>>> Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             49 TCs [PASS] >>>> Sanity Platform                 07 TCs [PASS] >>>> >>>> TOTAL: [ 61 TCs ] >>>> >>>> AIO - Duplex >>>> Setup                                    04 TCs [PASS] >>>> Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             52 TCs [PASS] >>>> Sanity Platform                 07 TCs [PASS] >>>> >>>> TOTAL: [ 64 TCs ] >>>> >>>> Standard - Local Storage (2+2) >>>> Setup                                    04 TCs [PASS] >>>> Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             52 TCs [PASS] >>>> Sanity Platform                 08 TCs [PASS] >>>> >>>> TOTAL: [ 65 TCs ] >>>> >>>> Standard External - Dedicated Storage (2+2+2) >>>> Setup                                    04 TCs [PASS] >>>> Provisioning                       01 TCs [PASS] >>>> Sanity OpenStack             52 TCs [PASS] >>>> Sanity Platform                 09 TCs [PASS] >>>> >>>> TOTAL: [ 66 TCs ] >>>> >>>> 2) NFV scenario test by me >>>>      on duplex/multi standard virtual setup >>>>            duplex bare metal setup >>>> ===== Setup >>>> ================================================================================================================================= >>>> >>>> 2020-05-14 02:30:05.524  Create flavor small >>>> ........................................ [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>>> .............................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_swap >>>> ................................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>>> ......................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium >>>> ....................................... [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>>> ............................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>>> .................................. [OKAY] >>>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>>> ........................ [OKAY] >>>> 2020-05-14 02:30:05.653  Create image cirros >>>> ........................................ [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros >>>> ....................................... [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>>> ............................. [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>>> .................................. [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>>> ........................ [OKAY] >>>> 2020-05-14 02:30:05.695  Create volume empty_volume >>>> ................................. [OKAY] >>>> 2020-05-14 02:30:05.786  Create network internal >>>> .................................... [OKAY] >>>> 2020-05-14 02:30:06.158  Create network external >>>> .................................... [OKAY] >>>> 2020-05-14 02:30:06.772  Create subnet internal >>>> ..................................... [OKAY] >>>> 2020-05-14 02:30:07.661  Create subnet external >>>> ..................................... [OKAY] >>>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>>> ................................... [OKAY] >>>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>>> ......................... [OKAY] >>>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>>> .............................. 
[OKAY] >>>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>>> .................... [OKAY] >>>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>>> ............................. [OKAY] >>>> 2020-05-14 02:31:21.241  Create instance >>>> cirros-image-with-volumes-1  ................ [OKAY] >>>> ============================================================================================================================================= >>>> >>>> ===== Test Iteration 0 (single-execution) >>>> =================================================================================================== >>>> >>>> 2020-05-14 02:33:04.172  Test Instance-Pause >>>> ........................................ [OKAY]  (2020-05-14 >>>> 02:33:18.078 Δ=0:00:12.870) >>>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:33:41.608 Δ=0:00:05.866) >>>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:33:59.546 Δ=0:00:05.792) >>>> 2020-05-14 02:34:11.103  Test Instance-Resume >>>> ....................................... [OKAY]  (2020-05-14 >>>> 02:34:17.756 Δ=0:00:05.937) >>>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>>> Δ=0:02:15.748) >>>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>>> Δ=0:00:11.704) >>>> 2020-05-14 02:37:30.673  Test Instance-Stop >>>> ......................................... [OKAY]  (2020-05-14 >>>> 02:38:44.543 Δ=0:01:13.220) >>>> 2020-05-14 02:39:00.481  Test Instance-Start >>>> ........................................ [OKAY]  (2020-05-14 >>>> 02:39:07.198 Δ=0:00:06.068) >>>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>>> Δ=0:00:22.306) >>>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>>> ................................. [OKAY]  (2020-05-14 02:41:22.720 >>>> Δ=0:01:24.179) >>>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>>> Δ=0:00:05.884) >>>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>>> Δ=0:00:21.637) >>>> 2020-05-14 02:43:52.320  Test Instance-Resize >>>> ....................................... [OKAY]  (2020-05-14 >>>> 02:45:16.409 Δ=0:01:22.812) >>>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>>> Δ=0:00:05.777) >>>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>>> Δ=0:00:21.748) >>>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>>> ...................................... [OKAY]  (2020-05-14 >>>> 02:48:59.762 Δ=0:01:12.980) >>>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>>> >>>> 3) Another 2 test >>>>      a) Using IPv6 >>>>           It can pass with workaround now.  I need one more fix for it. >>>>           In my previous patch https://review.opendev.org/#/c/716524 >>>> (merged), I dynamically override below >>>>              config_override: | >>>>                  [mysqld] >>>>                  bind_address=:: >>>>           However, it did not work now. 
From log,  it shows error >>>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>>> line: 1'" >>>>           I tried many methods, but could not remove the first line >>>> in 20-override.cnf >>>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>>> 20-override.cnf >>>>                  |- >>>>                  [mysqld] >>>>                  bind_address=:: >>>>          I can only add it in manifest.yaml as a static override >>>> like below. >>>>                 values: >>>>                    conf: >>>>                        database: >>>>                            config_override: | >>>>                                [mysqld] >>>>                                bind_address=:: >>>>                   b) Reset of controllers and check status of >>>> OpenStack while a controller is rebooting. >>>>           I have tested it and pass on simplex. >>>>           For duplex, I have a setup issue in my side. >>>>           @Jascanu, Nicolae  Could you help me on it for duplex >>>> test, if you have time today. Thanks! >>>> >>>> Zhipeng >>>> >>>> >>>> >>>> -----Original Message----- >>>> From: Miller, Frank >>>> Sent: 2020年5月26日 21:13 >>>> To: Liu, ZhipengS ; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Zhipeng: >>>> >>>> Can you publish the list of tests that have been run for openstack? >>>> >>>> Also has openstack been tested for the following scenarios: >>>> 1) Using IPv6 >>>> 2) Reset of controllers and check status of openstack while a >>>> controller is rebooting? >>>> >>>> Frank >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: Monday, May 25, 2020 3:14 AM >>>> To: starlingx-discuss at lists.starlingx.io >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi all, >>>> >>>> We have passed all sanity test on all setup. Thanks Nicolae!! >>>> We also built out OpenStack service images from layered build >>>> environment. >>>> >>>> Please help to review and push below patches to be merged, thanks! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>>> >>>> BRs >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月14日 16:49 >>>> To: 'Saul Wold' ; >>>> 'starlingx-discuss at lists.starlingx.io' >>>> >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Hi all, >>>> >>>> Call for patch review again! >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status) >>>> >>>> Thanks! >>>> Zhipeng >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS >>>> Sent: 2020年5月9日 8:38 >>>> To: Saul Wold ; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> Agree! >>>> >>>> -----Original Message----- >>>> From: Saul Wold >>>> Sent: 2020年5月9日 0:29 >>>> To: starlingx-discuss at lists.starlingx.io >>>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>>> Call for patch review!! >>>> >>>> I would strengthen that to no changes until we get Green Sanity >>>> other than what's required to make them Green. >>>> >>>> Full Stop! >>>> >>>> Sau! 
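Returning to the mariadb IPv6 override discussed above: the quickest check of whether a given override (dynamic or static) rendered correctly is to look at what actually lands inside the pod, as was done earlier in this thread; a sketch (pod name as quoted above):

    # A correctly rendered file should contain only the [mysqld] group and
    # bind_address=:: -- a stray leading "|-" line reproduces the
    # "Found option without preceding group" error quoted above
    kubectl exec -n openstack mariadb-server-0 -- cat /etc/mysql/conf.d/20-override.cnf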
>>>> >>>> >>>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>>> Until we can get sanity passing for several days in a row I strongly >>>>> suggest we do not allow any further changes into the load related to >>>>> OpenStack.  Folks can continue with reviews but let’s hold off >>>>> allowing merges related to a new OpenStack version. >>>>> >>>>> Frank >>>>> >>>>> *From:*Liu, ZhipengS >>>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>>> *To:* starlingx-discuss >>>>> *Cc:* YU CHENGDE ; Penney, Don >>>>> >>>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>>> for patch review!! >>>>> >>>>> Hi all, >>>>> >>>>> Please help to review OpenStack Ussuri upgrade patches. >>>>> >>>>> Our target is to get all below patches merged by end of next week. >>>>> >>>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status >>>>> :merged) >>>>> >>>>> During OpenStack upgrade for StarlingX, we have to move python2.7 to >>>>> python3.6 for OpenStack services as ussuri release only support >>>>> python3. >>>>> >>>>> We also rebased openstack-helm/helm-infra to latest version. >>>>> >>>>> Engineering build test status. >>>>> >>>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>>> PASS. >>>>> >>>>> Thanks! >>>>> >>>>> Zhipeng >>>>> >>>>> >>>>> _______________________________________________ >>>>> Starlingx-discuss mailing list >>>>> Starlingx-discuss at lists.starlingx.io >>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > 
_______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From taimoor.imtiaz at intel.com Wed Jun 10 21:52:43 2020 From: taimoor.imtiaz at intel.com (Imtiaz, Taimoor) Date: Wed, 10 Jun 2020 21:52:43 +0000 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <20200610201440.t2c334s43qtwpsqk@yuggoth.org> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> <9b55496eddfb47178a8023ba59e6ccd5@intel.com> <20200610201440.t2c334s43qtwpsqk@yuggoth.org> Message-ID: Hi Jeremy, I didn't mean to ascribe age to communities if that is what it came across as. I meant to say that Brendan Berns (as a steward) is different from Torvalds* and as you said it definitely is a matter of taste. I just think that many newer communities are using these tools. > Have any details on this? Popular Web search engines already crawl and index our list archives, and turn up relevant results from them. I do not actually. I think I meant to say that search engine functionality is built-in. *Of course my impression comes from news sources. Best, Taimoor -----Original Message----- From: Jeremy Stanley Sent: Wednesday, June 10, 2020 22:15 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Adoption discussions in PTG On 2020-06-10 18:50:05 +0000 (+0000), Imtiaz, Taimoor wrote: > Sure, I do not disagree that mailing lists are functional. > Discourse is so much more welcoming and information is easy to > discover. "Welcoming" and "easy to discover" are matters of personal taste, and so differ widely based on individual experience. De gustibus non est disputandum. > If you monitor a community forum such as Kubernetes', you'll see > people having fun too (showing off projects etc.). It's also a bit > more realtime and meant for threaded discussions. E-mail and thus mailing lists are also explicitly designed for threaded discussions, unless you've decided to cripple your communications by using a terrible mail client. My client shows me thread trees of list messages just fine. > It was just a suggestion on my end. I do not think we should compare > cloud native communities with Linux. The stewards are different > generations of folks and mindset are totally different. I hesitate to ascribe ageist generalizations to communication tooling preferences. Are you suggesting that the Linux kernel doesn't have younger developers? Or that Kubernetes doesn't have older developers? What is specific to the Linux maintainer "mindset" which differentiates it from the Kubernetes maintainer "mindset" in this regard? > In my observation, most people do not go through the hassle of > registering on mailing lists. They do however like browsing forums (I > know I do).. I have no problem subscribing to mailing lists, in fact I'm subscribed to many. I much prefer getting messages in my inbox and not having to go check a dozen different Web sites to read new forum posts for discussions in which I'm involved/interested. To be honest, I'd rather not start up a Web browser at all when I can help it. > SEO tooling also likes it 😊 [...] Have any details on this? Popular Web search engines already crawl and index our list archives, and turn up relevant results from them. 
-- Jeremy Stanley Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 From zhipengs.liu at intel.com Thu Jun 11 01:43:07 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 11 Jun 2020 01:43:07 +0000 Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! In-Reply-To: References: <4dc3f65f-c5cd-c13a-9285-1add73273f7b@windriver.com> Message-ID: Hi Scott, I have fixed merge conflict now! If you have any concern, please let me know. Thanks! Zhipeng -----Original Message----- From: Scott Little Sent: 2020年6月11日 4:28 To: starlingx-discuss at lists.starlingx.io; Liu, ZhipengS Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!! Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress wit a CENGN build. Scott On 2020-06-10 9:20 a.m., Scott Little wrote: > CENGN cycles aren't a problem.  People resources is a challenge. > > So the ask is for a manual build, on CENGN, adding in the nine patches > listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open). > > .. and the addition of two repos to the build-stx-base.sh step > > build-stx-base.sh >    --repo local-stx-build,... \ >    --repo stx-distro,... \ >    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \ >    --repo > ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ > > > Is that correct? > > Scott > > > On 2020-06-09 9:04 a.m., Saul Wold wrote: >> >> Frank, Scott, Davelet: >> >> Are there cycles available on Cengn (and people resources) to do a >> Cengn build with the Ussuri patch set applied?  I know this is >> different than a branch build.  I think we have done this kind of >> thing in the past. >> >> This might help to make sure we don't have any more Cengn build >> issues and could give the Test team a sanity spin with a Ussuri/Cengn >> build. >> >> Note there is a comment for Scott/Davelet at the bottom of Zhipeng's >> email. >> >> Thanks >>   Sau! >> >> >> On 6/9/20 1:39 AM, Liu, ZhipengS wrote: >>> Hi all, >>> >>> So far, all block issues and concerns have been addressed. >>> Since we have passed all sanity test, and Ussuri OpenStack has been >>> officially released last month, there should be no more reason to >>> block these patches merge. >>> >>> Next step: >>> Let's push to get ussuri upgrade/openstack-helm rebasing patches >>> merged. We need great help from core guys! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> # Below 6 patches are for OpenStack-helm/infra rebase. (we set first >>> patch with workflow-1 and add depends-on for other patches as we >>> need to merge them together.) Upgrade openstack-helm-infra zhipeng >>> liu starlingx/openstack-armada-app       workflow-1 Add mariadb >>> database config override to support ipv6 zhipeng liu >>> starlingx/openstack-armada-app Fix render error in cinder during >>> openstack-helm rebase zhipeng liu    starlingx/openstack-armada-app >>> Update download list for openstack-helm upgrade zhipeng liu >>> starlingx/openstack-armada-app Update manifest.yaml file for >>> openstack-helm upgrade. 
>>> zhipeng liu starlingx/openstack-armada-app Upgrade openstack-helm >>> zhipeng liu starlingx/openstack-armada-app >>> >>> # Below 3 patches is for OpenStack upgrade. >>> Update manifest.yaml file for ussuri openstack YU CHENGDE >>> starlingx/openstack-armada-app Modify build-tools and stable-wheels >>> for Ussuri upgrading YU CHENGDE    starlingx/root Upgrade openstack >>> docker images for stable/ussuri        YU CHENGDE    >>> starlingx/upstream >>> >>> >>> After removing required python3 dependent packages from local, we >>> can build out base image and OpenStack service images successfully >>> with below command. >>> ==================================================================== >>> =========== >>> >>> @Scott, please help to update cengn build script with below 2 >>> additional repos and help to trigger image build build-stx-base.sh >>>    --repo local-stx-build,... \ >>>    --repo stx-distro,... \ >>>    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ >>> \ >>>    --repo >>> ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/ >>> >>> Thanks a lot! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月8日 16:54 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> It is not easy to figure out whether/how/when OpenStack-helm-info >>> upstream introduce this issue and then fix it. >>> I also could not find any fix in LP[1], which just mentioned that >>> this intermittent issue not hit us after some changes in related field. >>> >>> Anyhow, below 2 patches should fix potential bug and I could not see >>> the same error log again in our ussuri upgrade EB. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> Since we have passed fully test, we'd better push to merge ussuri >>> upgrade/openstack-helm rebasing patches soon. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1816842/ >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月5日 22:32 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This looks promising.  Your theory is that the 2 >>> openstack-helm-infra patches will fix the mariadb recovery issues. >>> These 2 patches were merged in the openstack-helm-infra project in >>> January and February of 2020.   What would be good to know is what >>> broke mariadb recovery between April of 2019 when Chris Friesen >>> finished up his story [1] and our current loads today.  The most >>> likely explanation is the upversion of Train or the upversion to >>> openstack-helm-infra done in November 2019 introduced the mariadb >>> recovery issues.  And then the openstack-helm folks found and fixed >>> the issue earlier in 2020. >>> >>> If we had more time the preferred approach would be to merge just >>> the openstack-helm-infra changes first to prove they address mariadb >>> recovery and then in a separate commit merge Ussuri.  
But since you >>> have validated that mariadb recovers with your Ussuri branch and >>> this branch has these openstack-helm commits, I support letting >>> Ussuri merge into stx.4.0. >>> >>> Frank >>> [1] https://storyboard.openstack.org/#!/story/2004712 >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Friday, June 05, 2020 2:36 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Friesen, Chris >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> As for OpenStack not recovering after both controllers are reset [1] >>> I could not reproduce this issue with my Ussuri upgrade EB. >>> My test step is: >>> 1) ssh to standby controller and sudo reboot -f for it. >>> 2) sudo reboot -f for activated controller All pods can resume after >>> a while. >>> >>> However, I could reproduce this issue with DB 20200516T080009Z. >>>  From error logs,  it is an old issue analyzed by Chris Friesen in >>> [2] early last year. >>> >>> In ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. >>> It includes below 2 patches which fixed this stability issue. >>> https://review.opendev.org/#/c/704034/ Prevent splitbrain during >>> full Galera restart https://review.opendev.org/#/c/708071/ mariadb: >>> avoid state management thread death >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1881899 >>> [2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:35 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> This is not a new requirement.  Users expect the software to recover >>> when resets occur. >>> >>> As I had mentioned at the PTG yesterday I know personally that this >>> test passed in stx3.0 before the upversion to train. Someone else >>> who performs testing can look to determine when this test was done >>> as part of feature testing after train was delivered as it should >>> have been tested as part of stx.3.0 as well.  I do not know when >>> this started to break.  One topic we will discuss at the PTG >>> tomorrow will be how to improve our test coverage and automation so >>> this type of issue can be found immediately as new code is being >>> delivered. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, June 03, 2020 10:28 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Frank, >>> >>> Have we pass this case before?  Is it a new requirement? >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 22:12 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Yong/Zhipeng - the LP for openstack not recovering after both >>> controllers are reset is >>> https://bugs.launchpad.net/starlingx/+bug/1881899 >>> >>> Ovidiu is investigating and will provide any updates from his >>> investigation.  Please continue to keep us informed of your >>> investigation. 
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 10:38 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> We used a build from May 28. >>> >>> As for the decoupling issue these are actively being worked. If you >>> run the system helm-override-show command when the stx-openstack app >>> is applied you won’t see the CLI command fail.  It only fails when >>> you try a helm-override-show when the app is in uploaded state.  In >>> any case this will be fixed shortly. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 10:04 PM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Thanks for your quick update! >>> Which build are you using to test this case? >>> Since decoupling commits introduced several regressions (at least >>> 2),  not propose to do this kind of stability test with latest build. >>> BTW, do we have plan to revert them considering this stability risk?  >>> Our Ussuri upgrade patches is waiting for it☹ >>> >>> Furthermore, we have not seen this test case that force reboot both >>> controllers at the same time. Is it a new requirement? If not , have >>> we pass this case before, which build? >>> I'd like to help on it with the pass build for comparative analysis. >>> From my point , mariadb might not work if we reboot both controllers. >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年6月3日 8:55 >>> To: Miller, Frank ; Liu, ZhipengS >>> ; starlingx-discuss at lists.starlingx.io; >>> Church, Robert >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> An update on our testing and analysis today.  We are able to >>> reproduce the issue with OpenStack not recovering when we trigger a >>> reboot of both AIO controllers at the same time. This results in >>> MariaDB and multiple other OpenStack pods in CrashLoopBackoff and >>> openstack commands not working indefinitely after the controllers >>> recover.  We'll create a launchpad tomorrow to track this issue. >>> >>> Frank >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: Tuesday, June 02, 2020 12:25 PM >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng for the analysis.  What is challenging here is the >>> multitude of issues. >>> >>> In our debug of openstack the past few days we are seeing the app >>> fail completely.  After investigation this issue is a Day 1 >>> containerd issue.  This is tracked in LP: >>> https://bugs.launchpad.net/starlingx/+bug/1881353 >>> >>> The issue you are seeing on a swact is a new and very recent issue >>> tied to the decoupling commits that were merged late last week.  Bob >>> is investigating and I expect he'll have a fix soon for that. >>> >>> But the issues we are most concerned with are when we see mariadb >>> crashing and not able to recover or with openstack services not >>> working for longer periods of time.  We're attempting to isolate the >>> sequence of events that trigger this. 
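
A rough sketch of the reset-and-recovery check being described here, for an AIO-DX lab (pod names are taken from the logs quoted later in this thread; generated suffixes and timings will differ):

    # 1) ssh to the standby controller first, as described in this thread, and force-reboot it:
    sudo reboot -f
    # 2) then force-reboot the active controller itself:
    sudo reboot -f
    # 3) once the hosts recover, watch for openstack pods that have not come back:
    watch -n 10 "kubectl -n openstack get pods | grep -Ev 'Running|Completed'"
    # 4) drill into the usual suspects if anything sits in Init or CrashLoopBackOff:
    kubectl -n openstack get pods -o wide | grep -E 'mariadb|openvswitch|neutron-ovs-agent'
    kubectl -n openstack describe pod mariadb-server-0
    kubectl -n openstack logs mariadb-server-0 --previous --tail=50

Whether openstack CLI commands respond again is the user-visible check; the kubectl output above is what shows whether mariadb or the ovs pods are the ones holding recovery up.
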
>>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Tuesday, June 02, 2020 11:47 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied I also tested with daily build 20200516T080009Z. >>> However, it could not be reproduced. >>> We should  fix this regression ASAP! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月2日 16:48 >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Church, Robert >>> >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank and all, >>> >>> Update for issue 2. >>> I raised a new LP to track it. >>> https://bugs.launchpad.net/starlingx/+bug/1881722 >>> Below is the time statistics. It seems reasonable. No obvious issue >>> found. >>> 1) 3~4min for host restart and get ready. >>> 2) 2~3min for mariadb terminating, initialization, get ready. (then >>> configmap sync is ready) >>> 3) 2min for ovs-db ready (reduce probe live/ready timer can improve >>> a little, as it can retry quickly to connect ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock) >>> 4) 1min for other pods ready, like neutron-ovs-agent which depends >>> on ovs-db. ) Any comment? >>> >>> For LP https://bugs.launchpad.net/starlingx/+bug/1881454 >>>              Unable to unlock controller after swact and lock w/ >>> openstack applied >>>     And  https://bugs.launchpad.net/starlingx/+bug/1881711 >>>              system helm-override-show stx-openstack mariadb >>> openstack crash  It seems related to openstack plugin decouple >>> related patches. Should be a regression. >>>   Please see our update in this 2 LPs for detail info.  @Bob, could >>> you pls help further check it and your patches, thanks! >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年6月1日 16:20 >>> To: 'Miller, Frank' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> ; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> I also tested the issue 2 with latest daily build on duplex setup. >>> The conclusion is that the issue is there all the time. >>> This issue might not be fixed soon, but should not block OpenStack >>> upgrade, right? >>> >>> For 9 OpenStack patches below, I have removed all workflow-1, except >>> the first patch and add depends-on all them. >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open) >>> Your review and comments are welcome! >>> >>> As for issue 2, some detail info FYI. >>> It also needs to wait for around 10 min before all pods are ready >>> again after reboot for master build. >>> It stuck on below 2 pods for 10 min. The same as the one I saw with >>> my OpenStack upgrade engineering build. >>>       neutron-ovs-agent-controller-0-937646f6-xxznw(depends >>> openvswitch-db) >>>       openvswitch-db-8fxkw >>> Related key logs below. 
>>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  2m19s              kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : >>> failed to sync secret cache: timed out waiting for the condition >>>    Warning  FailedMount  105s               kubelet, controller-1 >>> MountVolume.SetUp failed for volume "openvswitch-bin" : failed to >>> sync configmap cache: timed out waiting for the condition >>>    Warning  Unhealthy    30s                kubelet, controller-1 >>> Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: >>> database connection failed (Permission denied) >>>    Warning  Unhealthy    7s                 kubelet, controller-1 >>> Readiness probe failed: ovs-vsctl: >>> unix:/var/run/openvswitch/db.sock: database connection failed >>> (Permission denied) >>> >>> Is it the same stability issue as the one reported from your test >>> team?  I can only see this issue after force rebooting. What is our >>> expected recovery time? >>> Your comment is appreciated! >>> >>> Thanks! >>> Zhipeng >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月29日 9:42 >>> To: 'Miller, Frank' ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Glad to see your quick reply!! >>> For OpenStack upgrade task, we have finished all test and get >>> patches ready for more than 2 weeks, but no any review comments and >>> feedback from your side.  What's the next step? >>> >>> For issue # 2,  in community meeting notes,  I saw that you had some >>> stability issue from WR local test team. But so far, I do not see >>> any LP for the detail info. You should ask them to do that!  Right? >>> >>> According to your concern, I tried to reproduce it with my build >>> (cherry pick OpenStack upgrade patches)yesterday, and the original >>> issue [1] was not seen any more, mariadb got ready quickly, no >>> regression. >>> >>> [1] https://bugs.launchpad.net/starlingx/+bug/1855474 >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月29日 1:07 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Thanks Zhipeng. >>> >>> Good to see progress on IPv6. >>> Waiting for 10 minutes for pods to recover isn't a good result. Is >>> there a LP open on this issue?  Which pods are not ready? What can >>> you tell us about this 10 minute outage? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Thursday, May 28, 2020 5:06 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> Nicolae already added test case description. Thanks Nicolae! >>> >>> I also did below test on AIO-DX virtual setup, exactly according to >>> your mentioned steps. 
>>> No issue found, but just need to wait for around 10 min before all >>> pods are ready again after reboot. >>> >>> For ipv6 issue, I have submitted new patch for it since dynamic >>> override for database config did not work. >>>   https://review.opendev.org/#/c/731461/ >>>   https://review.opendev.org/#/c/731470/ >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月27日 22:43 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Thanks for the info.  You have provided the # of testcases but not >>> what those testcase do.  Where can I find a description of what the >>> OpenStack testcases do? >>> >>> For the controller reset testcases I'd like to see the test result >>> for the following: >>> Is openstack usable during the following scenarios on AIO-DX and on >>> Standard configurations: >>> - Lock/unlock of standby controller >>> - reset (ie: reboot -f) of the standby controller >>> - reset (ie: reboot -f) of the active controller >>> - reapply of stx-openstack after the above scenarios >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Wednesday, May 27, 2020 9:15 AM >>> To: Miller, Frank ; >>> starlingx-discuss at lists.starlingx.io; Jascanu, Nicolae >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi Frank, >>> >>> We have done below tests. >>> 1) Sanity tests by Nicolae. >>> AIO - Simplex >>> Setup                                    04 TCs [PASS] Provisioning                       >>> 01 TCs [PASS] Sanity OpenStack             49 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 61 TCs ] >>> >>> AIO - Duplex >>> Setup                                    04 TCs [PASS] Provisioning                       >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 07 TCs [PASS] >>> >>> TOTAL: [ 64 TCs ] >>> >>> Standard - Local Storage (2+2) >>> Setup                                    04 TCs [PASS] Provisioning                       >>> 01 TCs [PASS] Sanity OpenStack             52 TCs [PASS] Sanity >>> Platform                 08 TCs [PASS] >>> >>> TOTAL: [ 65 TCs ] >>> >>> Standard External - Dedicated Storage (2+2+2) Setup                                    >>> 04 TCs [PASS] Provisioning                       01 TCs [PASS] >>> Sanity OpenStack             52 TCs [PASS] Sanity Platform                 >>> 09 TCs [PASS] >>> >>> TOTAL: [ 66 TCs ] >>> >>> 2) NFV scenario test by me >>>      on duplex/multi standard virtual setup >>>            duplex bare metal setup >>> ===== Setup >>> ==================================================================== >>> ============================================================= >>> 2020-05-14 02:30:05.524  Create flavor small >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral >>> .............................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_swap >>> ................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor small_ephemeral_swap >>> ......................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral >>> ............................. 
[OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.524  Create flavor medium_ephemeral_swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.653  Create image cirros >>> ........................................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros >>> ....................................... [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral >>> ............................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-swap >>> .................................. [OKAY] >>> 2020-05-14 02:30:05.695  Create volume cirros-ephemeral-swap >>> ........................ [OKAY] >>> 2020-05-14 02:30:05.695  Create volume empty_volume >>> ................................. [OKAY] >>> 2020-05-14 02:30:05.786  Create network internal >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.158  Create network external >>> .................................... [OKAY] >>> 2020-05-14 02:30:06.772  Create subnet internal >>> ..................................... [OKAY] >>> 2020-05-14 02:30:07.661  Create subnet external >>> ..................................... [OKAY] >>> 2020-05-14 02:30:08.553  Create instance cirros-1 >>> ................................... [OKAY] >>> 2020-05-14 02:30:29.918  Create instance cirros-ephemeral-1 >>> ......................... [OKAY] >>> 2020-05-14 02:30:43.160  Create instance cirros-swap-1 >>> .............................. [OKAY] >>> 2020-05-14 02:30:56.101  Create instance cirros-ephemeral-swap-1 >>> .................... [OKAY] >>> 2020-05-14 02:31:09.077  Create instance cirros-image-1 >>> ............................. [OKAY] >>> 2020-05-14 02:31:21.241  Create instance >>> cirros-image-with-volumes-1  ................ [OKAY] >>> ==================================================================== >>> ==================================================================== >>> ===== ===== Test Iteration 0 (single-execution) >>> ==================================================================== >>> =============================== >>> 2020-05-14 02:33:04.172  Test Instance-Pause >>> ........................................ [OKAY]  (2020-05-14 >>> 02:33:18.078 Δ=0:00:12.870) >>> 2020-05-14 02:33:35.073  Test Instance-Unpause >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:41.608 Δ=0:00:05.866) >>> 2020-05-14 02:33:53.049  Test Instance-Suspend >>> ...................................... [OKAY]  (2020-05-14 >>> 02:33:59.546 Δ=0:00:05.792) >>> 2020-05-14 02:34:11.103  Test Instance-Resume >>> ....................................... [OKAY]  (2020-05-14 >>> 02:34:17.756 Δ=0:00:05.937) >>> 2020-05-14 02:34:29.269  Test Instance-Reboot (soft) >>> ................................ [OKAY]  (2020-05-14 02:36:45.923 >>> Δ=0:02:15.748) >>> 2020-05-14 02:37:02.160  Test Instance-Reboot (hard) >>> ................................ [OKAY]  (2020-05-14 02:37:14.504 >>> Δ=0:00:11.704) >>> 2020-05-14 02:37:30.673  Test Instance-Stop >>> ......................................... [OKAY]  (2020-05-14 >>> 02:38:44.543 Δ=0:01:13.220) >>> 2020-05-14 02:39:00.481  Test Instance-Start >>> ........................................ [OKAY]  (2020-05-14 >>> 02:39:07.198 Δ=0:00:06.068) >>> 2020-05-14 02:39:18.578  Test Instance-Live-Migrate >>> ................................. [OKAY]  (2020-05-14 02:39:41.692 >>> Δ=0:00:22.306) >>> 2020-05-14 02:39:57.927  Test Instance-Cold-Migrate >>> ................................. 
[OKAY]  (2020-05-14 02:41:22.720 >>> Δ=0:01:24.179) >>> 2020-05-14 02:41:38.995  Test Instance-Cold-Migrate-Confirm >>> ......................... [OKAY]  (2020-05-14 02:41:45.441 >>> Δ=0:00:05.884) >>> 2020-05-14 02:41:57.108  Test Instance-Cold-Migrate-Revert >>> .......................... [OKAY]  (2020-05-14 02:43:36.381 >>> Δ=0:00:21.637) >>> 2020-05-14 02:43:52.320  Test Instance-Resize >>> ....................................... [OKAY]  (2020-05-14 >>> 02:45:16.409 Δ=0:01:22.812) >>> 2020-05-14 02:45:32.723  Test Instance-Resize-Confirm >>> ............................... [OKAY]  (2020-05-14 02:45:39.119 >>> Δ=0:00:05.777) >>> 2020-05-14 02:45:50.437  Test Instance-Resize-Revert >>> ................................ [OKAY]  (2020-05-14 02:47:30.175 >>> Δ=0:00:21.748) >>> 2020-05-14 02:47:46.230  Test Instance-Rebuild >>> ...................................... [OKAY]  (2020-05-14 >>> 02:48:59.762 Δ=0:01:12.980) >>> Total-Tests: 16     Execution-Time: 0:16:11.676 >>> >>> 3) Another 2 test >>>      a) Using IPv6 >>>           It can pass with workaround now.  I need one more fix for it. >>>           In my previous patch https://review.opendev.org/#/c/716524 >>> (merged), I dynamically override below >>>              config_override: | >>>                  [mysqld] >>>                  bind_address=:: >>>           However, it did not work now. From log,  it shows error >>> "OpenStack-Helm Mariadb - INFO - b'error: Found option without >>> preceding group in config file: /etc/mysql/conf.d/20-override.cnf at >>> line: 1'" >>>           I tried many methods, but could not remove the first line >>> in 20-override.cnf >>>                  mysql at mariadb-server-0:/etc/mysql/conf.d$ cat >>> 20-override.cnf >>>                  |- >>>                  [mysqld] >>>                  bind_address=:: >>>          I can only add it in manifest.yaml as a static override >>> like below. >>>                 values: >>>                    conf: >>>                        database: >>>                            config_override: | >>>                                [mysqld] >>>                                bind_address=:: >>>                   b) Reset of controllers and check status of >>> OpenStack while a controller is rebooting. >>>           I have tested it and pass on simplex. >>>           For duplex, I have a setup issue in my side. >>>           @Jascanu, Nicolae  Could you help me on it for duplex >>> test, if you have time today. Thanks! >>> >>> Zhipeng >>> >>> >>> >>> -----Original Message----- >>> From: Miller, Frank >>> Sent: 2020年5月26日 21:13 >>> To: Liu, ZhipengS ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Zhipeng: >>> >>> Can you publish the list of tests that have been run for openstack? >>> >>> Also has openstack been tested for the following scenarios: >>> 1) Using IPv6 >>> 2) Reset of controllers and check status of openstack while a >>> controller is rebooting? >>> >>> Frank >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: Monday, May 25, 2020 3:14 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> We have passed all sanity test on all setup. Thanks Nicolae!! >>> We also built out OpenStack service images from layered build >>> environment. >>> >>> Please help to review and push below patches to be merged, thanks! 
>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> BRs >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月14日 16:49 >>> To: 'Saul Wold' ; >>> 'starlingx-discuss at lists.starlingx.io' >>> >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Hi all, >>> >>> Call for patch review again! >>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+stat >>> us) >>> >>> Thanks! >>> Zhipeng >>> >>> -----Original Message----- >>> From: Liu, ZhipengS >>> Sent: 2020年5月9日 8:38 >>> To: Saul Wold ; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> Agree! >>> >>> -----Original Message----- >>> From: Saul Wold >>> Sent: 2020年5月9日 0:29 >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] >>> Call for patch review!! >>> >>> I would strengthen that to no changes until we get Green Sanity >>> other than what's required to make them Green. >>> >>> Full Stop! >>> >>> Sau! >>> >>> >>> On 5/8/20 9:05 AM, Miller, Frank wrote: >>>> Until we can get sanity passing for several days in a row I >>>> strongly suggest we do not allow any further changes into the load >>>> related to OpenStack.  Folks can continue with reviews but let’s >>>> hold off allowing merges related to a new OpenStack version. >>>> >>>> Frank >>>> >>>> *From:*Liu, ZhipengS >>>> *Sent:* Friday, May 08, 2020 11:59 AM >>>> *To:* starlingx-discuss >>>> *Cc:* YU CHENGDE ; Penney, Don >>>> >>>> *Subject:* [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call >>>> for patch review!! >>>> >>>> Hi all, >>>> >>>> Please help to review OpenStack Ussuri upgrade patches. >>>> >>>> Our target is to get all below patches merged by end of next week. >>>> >>>> https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+sta >>>> tus >>>> :merged) >>>> >>>> During OpenStack upgrade for StarlingX, we have to move python2.7 >>>> to >>>> python3.6 for OpenStack services as ussuri release only support >>>> python3. >>>> >>>> We also rebased openstack-helm/helm-infra to latest version. >>>> >>>> Engineering build test status. >>>> >>>>   1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setup. >>>>   2. nfv_scenario_tests PASS on simplex bare metal setup. >>>>   3. Sanity test is ongoing.   Duplex/standard virtual setup test >>>> PASS. >>>> >>>> Thanks! 
>>>> >>>> Zhipeng >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>> s >>>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From haochuan.z.chen at intel.com Thu Jun 11 02:53:06 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Thu, 11 Jun 2020 02:53:06 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , Message-ID: Hi voiculeasa I confirm backup and restore works without ceph backend. This issue is caused with my improper provision step. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan Sent: Tuesday, June 9, 2020 5:54 PM To: Chen, Haochuan Z ; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.chengde at 99cloud.net Thu Jun 11 03:21:39 2020 From: yu.chengde at 99cloud.net (YuChengDe) Date: Thu, 11 Jun 2020 11:21:39 +0800 (GMT+08:00) Subject: [Starlingx-discuss] =?utf-8?q?=5Bpytest=5D_Please_teach_me_how_to?= =?utf-8?q?_use_pytest_on_stx-openstack?= Message-ID: Hello: I am going to testing our stx-openstack through starlingx/test https://opendev.org/starlingx/test/src/branch/r/stx.3.0 May I ask for some tutorial and testing example? Many thanks. -- ————————————————————————————— 九州云信息科技有限公司 99CLOUD Inc. 于成德 产品开发部 邮箱(Email): yu.chengde at 99cloud.net 手机(Mobile): 13816965096 地址(Addr): 上海市局门路427号1号楼206 Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China 网址(Site): http://www.99cloud.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.chengde at 99cloud.net Thu Jun 11 03:22:11 2020 From: yu.chengde at 99cloud.net (YuChengDe) Date: Thu, 11 Jun 2020 11:22:11 +0800 (GMT+08:00) Subject: [Starlingx-discuss] =?utf-8?q?=5Bpytest=5D_Please_teach_me_how_to?= =?utf-8?q?_use_pytest_on_stx-openstack?= Message-ID: Hello: I am going to testing our stx-openstack through starlingx/test https://opendev.org/starlingx/test/src/branch/r/stx.3.0 May I ask for some tutorial and testing example? Many thanks. -- ————————————————————————————— 九州云信息科技有限公司 99CLOUD Inc. 于成德 产品开发部 邮箱(Email): yu.chengde at 99cloud.net 手机(Mobile): 13816965096 地址(Addr): 上海市局门路427号1号楼206 Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China 网址(Site): http://www.99cloud.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ildiko.vancsa at gmail.com Thu Jun 11 09:06:48 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 11 Jun 2020 11:06:48 +0200 Subject: [Starlingx-discuss] StarlingX Adoption discussions in PTG In-Reply-To: <9b55496eddfb47178a8023ba59e6ccd5@intel.com> References: <4bfaf6b353894e0b9786b47f6dc08c7e@intel.com> <4A67A66F-4A58-477B-8520-AA85F7B5FE93@gmail.com> <9b55496eddfb47178a8023ba59e6ccd5@intel.com> Message-ID: <29D88630-EFDD-40BD-AA7A-F30761359F77@gmail.com> […] > I agree with the blog post idea and I'll try to get some users to write those. […] Sounds great! Please let me know if you need any help throughout the process. Thanks, Ildikó From maryx.camp at intel.com Thu Jun 11 14:38:47 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 11 Jun 2020 14:38:47 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 2020-06-10 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Our tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 2020-06-10  . All -- reviews merged since last meeting:  1 . All -- bug status -- 5 total, 2 WIP o [ww23] Fix search function & Add instructions for building stx-openstack application [not started] o [ww20] Networking documentation [not started] o [ww17] Debug guide [WIP]  o [ww16] Build Avoidance [WIP] https://docs.starlingx.io/developer_resources/build_guide.html#build-avoidance) . Reviews in progress:    o Chinese document for layered build https://review.opendev.org/#/c/726737/  o TSN in Kata containers - [WIP] Mary's clerical edits. o Rook migration - Martin Chen author - orig review is merged. AR Mary to do clerical edits.  o Modifying layered build commands (add pike / remove pike)  This review is valid for the current situation: https://review.opendev.org/#/c/717424/  . All -- Opens o Bart explained the reviews from Andreas Jaeger which updated the openstackdocs theme: https://review.opendev.org/#/c/733576/ and https://review.opendev.org/#/c/733566/  The "submit a bug link" on the doc pages points to LP now, hooray! o Greg sent an email suggestion to provide an alternate method for accessing openstack with the local CLI. AR Mary update https://docs.starlingx.io/deploy_install_guides/r4_release/openstack/access.html#local-cli o Poornima joined to ask for reviewers on the Layered Build guide: https://review.opendev.org/#/c/733048/9 From ildiko.vancsa at gmail.com Thu Jun 11 15:37:40 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 11 Jun 2020 17:37:40 +0200 Subject: [Starlingx-discuss] StarlingX is a new confirmed Open Infrastructure project!! Message-ID: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> Hi StarlingX Community, I’m reaching out to you to share the good news that the Board of Directors of the OpenStack Foundation has just approved to confirm StarlingX as a new top-level Open Infrastructure project supported by OSF[1]. Hereby I would like to congratulate to the community for all the hard work and achievements during the pilot phase and looking forward to continue working with you to shape both the community and the platform to achieve further successes! I would also like to thank Ian and Saul who took on the task to present to the Board today, they did an amazing job to talk about the first two years of the project. 
Thanks, Ildikó [1] https://www.openstack.org/news/view/454/starlingx-confirmed-as-toplevel-osf-project From glenn.seiler at windriver.com Thu Jun 11 16:00:11 2020 From: glenn.seiler at windriver.com (Seiler, Glenn) Date: Thu, 11 Jun 2020 16:00:11 +0000 Subject: [Starlingx-discuss] StarlingX is a new confirmed Open Infrastructure project!! In-Reply-To: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> References: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> Message-ID: Congratulations to everyone who has participated in this great project over the past two years. This is a fantastic achievement. I know StarlingX is going to continue to prosper and grow. -glenn ________________________________ From: Ildiko Vancsa Sent: Thursday, June 11, 2020 8:37:40 AM To: starlingx Subject: [Starlingx-discuss] StarlingX is a new confirmed Open Infrastructure project!! Hi StarlingX Community, I’m reaching out to you to share the good news that the Board of Directors of the OpenStack Foundation has just approved to confirm StarlingX as a new top-level Open Infrastructure project supported by OSF[1]. Hereby I would like to congratulate to the community for all the hard work and achievements during the pilot phase and looking forward to continue working with you to shape both the community and the platform to achieve further successes! I would also like to thank Ian and Saul who took on the task to present to the Board today, they did an amazing job to talk about the first two years of the project. Thanks, Ildikó [1] https://www.openstack.org/news/view/454/starlingx-confirmed-as-toplevel-osf-project _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.a.cobbley at intel.com Thu Jun 11 16:06:28 2020 From: david.a.cobbley at intel.com (Cobbley, David A) Date: Thu, 11 Jun 2020 16:06:28 +0000 Subject: [Starlingx-discuss] StarlingX is a new confirmed Open Infrastructure project!! In-Reply-To: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> References: <9A23B5D1-B307-4057-8174-0CA1AADC2E52@gmail.com> Message-ID: <95CAB4F4-F045-4E0B-B5FE-912EB2AD4C75@intel.com> This is wonderful news, and from where the project started, was not easy to achieve. It is a testament to the passion and dedication of the team that the project has reached this level and overcome several challenges along the way. Congratulations! --David Cobbley On 6/11/20, 8:39 AM, "Ildiko Vancsa" wrote: Hi StarlingX Community, I’m reaching out to you to share the good news that the Board of Directors of the OpenStack Foundation has just approved to confirm StarlingX as a new top-level Open Infrastructure project supported by OSF[1]. Hereby I would like to congratulate to the community for all the hard work and achievements during the pilot phase and looking forward to continue working with you to shape both the community and the platform to achieve further successes! I would also like to thank Ian and Saul who took on the task to present to the Board today, they did an amazing job to talk about the first two years of the project. 
Thanks, Ildikó [1] https://www.openstack.org/news/view/454/starlingx-confirmed-as-toplevel-osf-project _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Thu Jun 11 17:24:41 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 13:24:41 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 225 - Failure! Message-ID: <1122549494.1633.1591896282389.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 225 Status: Failure Timestamp: 20200611T172331Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200611T142734Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs MASTER_BUILD_NUMBER: 2 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200611T142734Z DOCKER_BUILD_ID: jenkins-ussuri-20200611T142734Z-builder TIMESTAMP: 20200611T142734Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs From build.starlingx at gmail.com Thu Jun 11 17:24:43 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 13:24:43 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 2 - Failure! Message-ID: <448997430.1636.1591896284713.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 2 Status: Failure Timestamp: 20200611T142734Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: true From sgw at linux.intel.com Thu Jun 11 17:53:33 2020 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 11 Jun 2020 10:53:33 -0700 Subject: [Starlingx-discuss] Ussuri Test build failed Message-ID: Zhipeng, Looks like there is a missing dependency issue with the Ussuri build see the logs [0]. 
Summary of Errors: Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libaprutil-1.so.0()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: system-logos >= 7.92.1-1 Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libaprutil-1.so.0()(64bit) Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) Requires: scl-utils Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libjansson.so.4()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libapr-1.so.0()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libapr-1.so.0()(64bit) Error: Package: rh-python36-runtime-2.0-1.el7.x86_64 (ussuri-wsgi) Requires: scl-utils Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: /etc/mime.types Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Please take a look into this. Thanks Sau! [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_base_image-238.log.html From maryx.camp at intel.com Thu Jun 11 18:13:06 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Thu, 11 Jun 2020 18:13:06 +0000 Subject: [Starlingx-discuss] issue for backup and restore In-Reply-To: References: , Message-ID: Hi Martin and Dan, The Backup and restore guide review has just merged in the StarlingX documentation. Please have a look at the guide here: https://docs.starlingx.io/developer_resources/backup_restore.html If I can fix the guide to be more clear and prevent errors, please open a Launchpad by clicking the "bug" button or submit a review with changes. Thanks in advance for your feedback to improve the STX documentation, Mary Camp PTIGlobal Technical Writer | maryx.camp at intel.com From: Chen, Haochuan Z Sent: Wednesday, June 10, 2020 10:53 PM To: Voiculeasa, Dan ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa I confirm backup and restore works without ceph backend. This issue is caused with my improper provision step. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 9, 2020 5:54 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello Martin, I didn't encounter that issue when testing, but also I didn't test recently without ceph backend. Are you using a local build iso? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph? Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Monday, June 8, 2020 5:21 AM To: Voiculeasa, Dan >; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] issue for backup and restore Hi voiculeasa When you restore system, do you have such issue. I deploy the system without add storagebackend ceph, simplex. 
Restore process $ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz" $ source /etc/platform/openrc $ system host-unlock 1 u'9\nTraceback (most recent call last):\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data\n **args)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost\n self._configure_controller_host(context, host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host\n self._puppet.update_host_config(host)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper\n func(self, *args, **kwargs)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config\n config.update(puppet_plugin.obj.get_host_config(host))\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config\n generate_driver_config(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config\n generate_mlx4_core_options(context, config)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options\n num_vfs_options = build_mlx4_num_vfs_options(context)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options\n ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver\n port = get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port\n return interface.get_interface_port(context, iface)\n\n File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port\n return context[\'ports\'][iface[\'id\']]\n\nKeyError: 9\n' [sysadmin at localhost playbooks(keystone_admin)]$ Thanks Martin, Chen IOTG, Software Engineer 021-61164330 From: Voiculeasa, Dan > Sent: Tuesday, June 2, 2020 9:23 PM To: Chen, Haochuan Z >; starlingx-discuss at lists.starlingx.io Subject: Re: issue for backup and restore Hello, What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say? If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours]. Thanks, Dan Voiculeasa ________________________________ From: Chen, Haochuan Z > Sent: Sunday, May 24, 2020 4:08 PM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] issue for backup and restore Hi I follow this guide to check backup and restore https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst But when I run this command to restore the system, it will fail with such error log. 
sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz" TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] ******************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]} Any idea about this. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.liu at windriver.com Thu Jun 11 18:43:10 2020 From: yang.liu at windriver.com (Liu, Yang (YOW)) Date: Thu, 11 Jun 2020 18:43:10 +0000 Subject: [Starlingx-discuss] [pytest] Please teach me how to use pytest on stx-openstack In-Reply-To: References: Message-ID: Hi Chengde, You can start with the training video in following share drive: https://drive.google.com/drive/folders/1AvUCq3ojuhNZV6XE8YdRhp9PVxixRIeE Cheers, Yang From: YuChengDe [mailto:yu.chengde at 99cloud.net] Sent: June-10-20 11:22 PM To: starlingx-discuss at lists.starlingx.io; Liu, Yang (YOW) Subject: [pytest] Please teach me how to use pytest on stx-openstack Hello: I am going to testing our stx-openstack through starlingx/test https://opendev.org/starlingx/test/src/branch/r/stx.3.0 May I ask for some tutorial and testing example? Many thanks. [http://mailhz.qiye.163.com/qiyeimage/logo/60511048/1576638602260.jpg] -- ————————————————————————————— 九州云信息科技有限公司 99CLOUD Inc. 于成德 产品开发部 邮箱(Email): yu.chengde at 99cloud.net 手机(Mobile): 13816965096 地址(Addr): 上海市局门路427号1号楼206 Room 206, Bldg 1, No.427 JuMen Road, ShangHai, China 网址(Site): http://www.99cloud.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolae.jascanu at intel.com Thu Jun 11 19:31:52 2020 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Thu, 11 Jun 2020 19:31:52 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20200611T021306Z Message-ID: Sanity Test from 2020-June-11 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200611T021306Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200611T021306Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Thu Jun 11 22:35:10 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 12 Jun 2020 00:35:10 +0200 Subject: [Starlingx-discuss] StarlingX PTG overview blog post Message-ID: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> Hi, As discussed on the community call this week I typed up a summary[1] about the StarlingX PTG sessions. It is a draft version, but I wanted to share for feedback so that we can put it up on the blog early next week. As a general objective for the blog I kept it relatively high level with pointers where I had with further details so it is easy to read and those who are interested can follow up on specific items. Please keep this in mind when you review the text. Please leave comments or fixes in the etherpad __by the end of day Monday (June 15)__. Please let me know if you have any questions. Thanks, Ildikó [1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020 From bruce.e.jones at intel.com Thu Jun 11 23:06:30 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 11 Jun 2020 23:06:30 +0000 Subject: [Starlingx-discuss] StarlingX PTG overview blog post In-Reply-To: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> References: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> Message-ID: Wow, that looks amazingly good Ildiko, especially considering the time of day in your time zone when you were attending the PTG. Thank you! 
brucej -----Original Message----- From: Ildiko Vancsa Sent: Thursday, June 11, 2020 3:35 PM To: starlingx Subject: [Starlingx-discuss] StarlingX PTG overview blog post Hi, As discussed on the community call this week I typed up a summary[1] about the StarlingX PTG sessions. It is a draft version, but I wanted to share for feedback so that we can put it up on the blog early next week. As a general objective for the blog I kept it relatively high level with pointers where I had with further details so it is easy to read and those who are interested can follow up on specific items. Please keep this in mind when you review the text. Please leave comments or fixes in the etherpad __by the end of day Monday (June 15)__. Please let me know if you have any questions. Thanks, Ildikó [1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Thu Jun 11 23:09:50 2020 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 11 Jun 2020 16:09:50 -0700 Subject: [Starlingx-discuss] StarlingX PTG overview blog post In-Reply-To: References: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> Message-ID: <477904d7-fbdc-76de-c63b-7b4bd8c642da@linux.intel.com> It's a great start, I tweaked on item in multios section, I think there might be some re-ordering of the paragaphs just to move some of the adoption/community stuff first and maybe the 4.0 and 5x planning second. I did not want to make those changes directly, but can. Sau! On 6/11/20 4:06 PM, Jones, Bruce E wrote: > Wow, that looks amazingly good Ildiko, especially considering the time of day in your time zone when you were attending the PTG. Thank you! > > brucej > > -----Original Message----- > From: Ildiko Vancsa > Sent: Thursday, June 11, 2020 3:35 PM > To: starlingx > Subject: [Starlingx-discuss] StarlingX PTG overview blog post > > Hi, > > As discussed on the community call this week I typed up a summary[1] about the StarlingX PTG sessions. > > It is a draft version, but I wanted to share for feedback so that we can put it up on the blog early next week. As a general objective for the blog I kept it relatively high level with pointers where I had with further details so it is easy to read and those who are interested can follow up on specific items. Please keep this in mind when you review the text. > > Please leave comments or fixes in the etherpad __by the end of day Monday (June 15)__. > > Please let me know if you have any questions. > > Thanks, > Ildikó > > [1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020 > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From build.starlingx at gmail.com Fri Jun 12 00:31:52 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 20:31:52 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 226 - Still Failing! 
In-Reply-To: <1399540446.1631.1591896280262.JavaMail.javamailuser@localhost> References: <1399540446.1631.1591896280262.JavaMail.javamailuser@localhost> Message-ID: <335343019.1641.1591921913022.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 226 Status: Still Failing Timestamp: 20200611T174357Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/ussuri/20200611T142734Z OS: centos MUNGED_BRANCH: ussuri MY_REPO: /localdisk/designer/jenkins/ussuri/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs MASTER_BUILD_NUMBER: 2 PUBLISH_LOGS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs MASTER_JOB_NAME: STX_build_master_ussuri MY_REPO_ROOT: /localdisk/designer/jenkins/ussuri PUBLISH_DISTRO_BASE: /export/mirror/starlingx/ussuri/centos/monolithic PUBLISH_TIMESTAMP: 20200611T142734Z DOCKER_BUILD_ID: jenkins-ussuri-20200611T142734Z-builder TIMESTAMP: 20200611T142734Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs LAYER: PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs From build.starlingx at gmail.com Fri Jun 12 03:01:49 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 23:01:49 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 421 - Failure! Message-ID: <1177108974.1644.1591930911993.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 421 Status: Failure Timestamp: 20200612T014342Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200612T013217Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20200612T013217Z DOCKER_BUILD_ID: jenkins-master-distro-20200612T013217Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200612T013217Z/logs FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20200612T013217Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Fri Jun 12 03:01:53 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Jun 2020 23:01:53 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_distro_master_master - Build # 148 - Failure! 
Message-ID: <1883025053.1647.1591930916294.JavaMail.javamailuser@localhost>

Project: STX_build_layer_distro_master_master
Build #: 148
Status: Failure
Timestamp: 20200612T013217Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20200612T013217Z/logs
--------------------------------------------------------------------------------
Parameters
FULL_BUILD: false
FORCE_BUILD: false

From zhipengs.liu at intel.com Fri Jun 12 04:05:44 2020
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Fri, 12 Jun 2020 04:05:44 +0000
Subject: [Starlingx-discuss] Ussuri Test build failed
In-Reply-To: References: Message-ID:

Hi Scott,

Root cause found! Please double-check your CENGN script. In the log I can see that you added exactly 4 repos, but build-stx-base.sh ran with only 2 of them, as shown in the trace below. I guess you need to change "EXTRA_ARGS=" to "EXTRA_ARGS+=" so that the second assignment appends to the first instead of overwriting it.

+ EXTRA_ARGS=' --repo stx-local-build,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/outputs/RPMS/std --repo stx-mirror-distro,http://build.starlingx.cengn.ca:80//mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/inputs/RPMS '
+ '[' ussuri-stable-latest == ussuri-stable-latest ']'
+ EXTRA_ARGS=' --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/ '
+ /localdisk/designer/jenkins/ussuri/cgcs-root/build-tools/build-docker-images/build-stx-base.sh --os centos --os-version 7.5.1804 --stream stable --version ussuri-stable --user starlingx --registry docker.io --attempts 5 --push --latest --latest-tag=ussuri-stable-latest --clean --repo ussuri-ceph,http://build.starlingx.cengn.ca:80/mirror/centos/download.ceph.com/rpm-mimic/el7/x86_64/ --repo ussuri-wsgi,http://build.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7/sclo/x86_64/rh/

Thanks!
Zhipeng

-----Original Message-----
From: Saul Wold
Sent: June 12, 2020 1:54
To: starlingx-discuss at lists.starlingx.io; Hu, Yong ; Liu, ZhipengS
Subject: Ussuri Test build failed

Zhipeng,

Looks like there is a missing dependency issue with the Ussuri build; see the logs [0].
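To illustrate Zhipeng's point before Saul's quoted error summary continues below: a minimal sketch of how plain '=' differs from '+=' in a shell script. The variable name matches the CENGN trace above, but the repo names and URLs are shortened placeholders, not the real mirror values.

#!/bin/bash
# Plain assignment: the second 'EXTRA_ARGS=' replaces the first, so only the
# last two --repo entries ever reach build-stx-base.sh.
EXTRA_ARGS=' --repo stx-local-build,http://example.local/std --repo stx-mirror-distro,http://example.local/rpms '
EXTRA_ARGS=' --repo ussuri-ceph,http://example.local/ceph --repo ussuri-wsgi,http://example.local/wsgi '
echo "with '=' :${EXTRA_ARGS}"    # only the ceph and wsgi repos remain

# Appending with '+=' keeps the earlier value, so all four repos are passed on.
EXTRA_ARGS=' --repo stx-local-build,http://example.local/std --repo stx-mirror-distro,http://example.local/rpms '
EXTRA_ARGS+=' --repo ussuri-ceph,http://example.local/ceph --repo ussuri-wsgi,http://example.local/wsgi '
echo "with '+=':${EXTRA_ARGS}"    # all four --repo entries survive

Run under 'bash -x', the second form would show all four repos on the build-stx-base.sh command line, rather than the two-repo invocation quoted in the trace above.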
Summary of Errors: Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libaprutil-1.so.0()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: system-logos >= 7.92.1-1 Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libaprutil-1.so.0()(64bit) Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) Requires: scl-utils Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libjansson.so.4()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libapr-1.so.0()(64bit) Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils Error: Package: httpd24-httpd-tools-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: libapr-1.so.0()(64bit) Error: Package: rh-python36-runtime-2.0-1.el7.x86_64 (ussuri-wsgi) Requires: scl-utils Error: Package: httpd24-runtime-1.1-19.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: /etc/mime.types Error: Package: httpd24-httpd-2.4.34-15.el7.x86_64 (ussuri-wsgi) Requires: policycoreutils-python Please take a look into this. Thanks Sau! [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200611T142734Z/logs/jenkins-STX_build_docker_base_image-238.log.html From build.starlingx at gmail.com Fri Jun 12 08:03:13 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 12 Jun 2020 04:03:13 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_ussuri - Build # 3 - Still Failing! In-Reply-To: <729016722.1634.1591896282936.JavaMail.javamailuser@localhost> References: <729016722.1634.1591896282936.JavaMail.javamailuser@localhost> Message-ID: <2050768555.1651.1591948993675.JavaMail.javamailuser@localhost> Project: STX_build_master_ussuri Build #: 3 Status: Still Failing Timestamp: 20200612T080012Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/ussuri/centos/monolithic/20200612T080012Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From ildiko.vancsa at gmail.com Fri Jun 12 09:04:32 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 12 Jun 2020 11:04:32 +0200 Subject: [Starlingx-discuss] StarlingX PTG overview blog post In-Reply-To: <477904d7-fbdc-76de-c63b-7b4bd8c642da@linux.intel.com> References: <0EC4D56B-D83F-430B-8584-AA480E62EE6D@gmail.com> <477904d7-fbdc-76de-c63b-7b4bd8c642da@linux.intel.com> Message-ID: Thanks for the quick review and suggestions! @Saul: I thought to start with the technical items and then transition to community and cross-project, but I’m easy on the order. I kind of lost track of what we talked about when exactly so I gave up on chronological order pretty quick. :) I’m fine with you making the change or if you’re more comfortable with me moving things around I can do it based on your preference. Let me know. Thanks, Ildikó > On Jun 12, 2020, at 01:09, Saul Wold wrote: > > It's a great start, I tweaked on item in multios section, I think there might be some re-ordering of the paragaphs just to move some of the adoption/community stuff first and maybe the 4.0 and 5x planning second. 
> > I did not want to make those changes directly, but can. > > Sau! > > > On 6/11/20 4:06 PM, Jones, Bruce E wrote: >> Wow, that looks amazingly good Ildiko, especially considering the time of day in your time zone when you were attending the PTG. Thank you! >> brucej >> -----Original Message----- >> From: Ildiko Vancsa >> Sent: Thursday, June 11, 2020 3:35 PM >> To: starlingx >> Subject: [Starlingx-discuss] StarlingX PTG overview blog post >> Hi, >> As discussed on the community call this week I typed up a summary[1] about the StarlingX PTG sessions. >> It is a draft version, but I wanted to share for feedback so that we can put it up on the blog early next week. As a general objective for the blog I kept it relatively high level with pointers where I had with further details so it is easy to read and those who are interested can follow up on specific items. Please keep this in mind when you review the text. >> Please leave comments or fixes in the etherpad __by the end of day Monday (June 15)__. >> Please let me know if you have any questions. >> Thanks, >> Ildikó >> [1] https://etherpad.opendev.org/p/stx-virtual-ptg-blog-june-2020 >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yatindra.shashi at intel.com Fri Jun 12 10:13:33 2020 From: yatindra.shashi at intel.com (Shashi, Yatindra) Date: Fri, 12 Jun 2020 10:13:33 +0000 Subject: [Starlingx-discuss] Unable to log in controller-1 after changing password on active controller-0 Message-ID: Hi All, In AIO-Duplex Setup 3.0 As after certain days Stx force user to change the Password, I changed the password in the Controller-0 but I did not do on the cont-1. I had locked/unlocked Cont-1 and tried to login with old/new password but I get access denied. Is there way to reset or change sysadmin Password of cont-1. I am able to login to dashboard and cont-0 with the password I had. Mit freundlichen Grüßen/ with best regards, Yatindra Shashi IoTG DE- Intel Corporation Munich, Germany P Save Paper, Go Digital :) Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Jun 12 16:17:28 2020 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 12 Jun 2020 09:17:28 -0700 Subject: [Starlingx-discuss] Ussuri Test build failed In-Reply-To: References: Message-ID: Turns out that Scott might have found that and fixed it in a second build that happened yesterday afternoon, but the Failure notification does not seem to have been sent. The failed build logs [0] still seem to show a variety of missing dependencies. There might also be another merge conflict. 
There were 10 failures:

stx-cinder
> ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303)) (from versions: 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.12, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.9.0, 1.9.1, 1.9.2, 1.9.3)
> ERROR: No matching distribution found for google-api-python-client===1.7.11 (from -c /tmp/wheels/upper-constraints.txt (line 303))

stx-fm-rest-api
> ERROR: Could not find a version that satisfies the requirement pecan===1.3.3 (from -c /tmp/wheels/upper-constraints.txt (line 21)) (from versions: 0.6.0, 0.6.1, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.9.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.1.0,
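On the version-pin errors above, a minimal sketch of how an exact '===' pin in a constraints file produces this kind of failure, for anyone who wants to reproduce it outside the CENGN wheel build. Only python3 with venv and network access to a package index are assumed; the /tmp/uc-check paths are placeholders, not the real /tmp/wheels layout.

# upper-constraints.txt pins exact versions with PEP 440 '==='. The candidate list
# in the error above jumps from 1.7.10 to 1.7.12, so the pinned 1.7.11 is simply
# not on the index the build consulted, and pip refuses to substitute a neighbour.
python3 -m venv /tmp/uc-check
. /tmp/uc-check/bin/activate
printf 'google-api-python-client===1.7.11\n' > /tmp/uc-check/constraints.txt
pip install -c /tmp/uc-check/constraints.txt google-api-python-client
# Expected result while 1.7.11 stays missing from the index in use:
#   ERROR: Could not find a version that satisfies the requirement google-api-python-client===1.7.11
#   ERROR: No matching distribution found for google-api-python-client===1.7.11

The usual ways out are to bump the pin in the constraints file to a release the index still serves, or to point the build at a mirror that carries the missing version.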