Hi Chris and Bob,

Could you help add +2 and W+1 for the last patch again?
https://review.opendev.org/731668 Fix render error in cinder during openstack-helm rebase
The last comment was about the commit message, and the patch has been updated accordingly.

Thanks!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS
Sent: June 17, 2020 1:06
To: Friesen, Chris <Chris.Friesen@windriver.com>; Church, Robert <Robert.Church@windriver.com>; starlingx-discuss@lists.starlingx.io
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Hi Chris and Bob,

Could you help review the Ussuri upgrade patches below again?
https://review.opendev.org/#/q/topic:for_ussuri+(status:open)
We need your help to push them through to merge!

Thanks!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: June 15, 2020 23:49
To: Friesen, Chris <Chris.Friesen@windriver.com>; starlingx-discuss@lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Hi Chris,

Thanks a lot for your comments on our Ussuri upgrade patches, even if they came a little late.
https://review.opendev.org/#/q/topic:for_ussuri+(status:open)
Last Friday I replied to your comments one by one and updated the related commit messages according to your proposals. I just want to know whether you still have any remaining concerns about these patches. As you know, our OpenStack upgrade task is in the final mile for STX 4.0, and I'd like to work closely with you to get them merged this week.

Thanks!!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: June 11, 2020 9:43
To: Scott Little <scott.little@windriver.com>; starlingx-discuss@lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Hi Scott,

I have fixed the merge conflicts now! If you have any concerns, please let me know.

Thanks!
Zhipeng

-----Original Message-----
From: Scott Little <scott.little@windriver.com>
Sent: June 11, 2020 4:28
To: starlingx-discuss@lists.starlingx.io; Liu, ZhipengS <zhipengs.liu@intel.com>
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!

Six of the nine updates are in a state of merge conflict. Please resolve the conflicts so that I can make progress with a CENGN build.

Scott

On 2020-06-10 9:20 a.m., Scott Little wrote:
CENGN cycles aren't a problem. People resources are the challenge.
So the ask is for a manual build, on CENGN, adding in the nine patches listed by https://review.opendev.org/#/q/topic:for_ussuri+(status:open).
... and the addition of two repos to the build-stx-base.sh step:

build-stx-base.sh \
    --repo local-stx-build,... \
    --repo stx-distro,... \
    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \
    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/
Is that correct?
Scott
On 2020-06-09 9:04 a.m., Saul Wold wrote:
Frank, Scott, Davelet:
Are there cycles available on CENGN (and people resources) to do a CENGN build with the Ussuri patch set applied? I know this is different from a branch build. I think we have done this kind of thing in the past.
This might help make sure we don't have any more CENGN build issues, and could give the Test team a sanity spin with a Ussuri/CENGN build.
Note there is a comment for Scott/Davelet at the bottom of Zhipeng's email.
Thanks Sau!
On 6/9/20 1:39 AM, Liu, ZhipengS wrote:
Hi all,
So far, all blocking issues and concerns have been addressed. Since we have passed all sanity tests, and Ussuri OpenStack was officially released last month, there should be no more reason to block these patches from merging.
Next step: let's push to get the Ussuri upgrade / openstack-helm rebasing patches merged. We need help from the core reviewers! https://review.opendev.org/#/q/topic:for_ussuri+(status:open)
# The 6 patches below are for the openstack-helm/infra rebase. (We set the first patch to Workflow-1 and added Depends-On to the other patches, as we need to merge them together.)

Upgrade openstack-helm-infra                              zhipeng liu    starlingx/openstack-armada-app    Workflow-1
Add mariadb database config override to support ipv6      zhipeng liu    starlingx/openstack-armada-app
Fix render error in cinder during openstack-helm rebase   zhipeng liu    starlingx/openstack-armada-app
Update download list for openstack-helm upgrade           zhipeng liu    starlingx/openstack-armada-app
Update manifest.yaml file for openstack-helm upgrade.     zhipeng liu    starlingx/openstack-armada-app
Upgrade openstack-helm                                    zhipeng liu    starlingx/openstack-armada-app
# The 3 patches below are for the OpenStack upgrade.

Update manifest.yaml file for ussuri openstack             YU CHENGDE    starlingx/openstack-armada-app
Modify build-tools and stable-wheels for Ussuri upgrading  YU CHENGDE    starlingx/root
Upgrade openstack docker images for stable/ussuri          YU CHENGDE    starlingx/upstream
After removing the required python3-dependent packages locally, we can build the base image and the OpenStack service images successfully with the command below.
@Scott, please help to update the CENGN build script with the 2 additional repos below and trigger the image build:

build-stx-base.sh \
    --repo local-stx-build,... \
    --repo stx-distro,... \
    --repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \
    --repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/
Thanks a lot! Zhipeng
-----Original Message-----
From: Liu, ZhipengS
Sent: June 8, 2020 16:54
To: 'Miller, Frank' <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Friesen, Chris <Chris.Friesen@windriver.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank,
It is not easy to figure out whether/how/when openstack-helm-infra upstream introduced this issue and then fixed it. I also could not find any fix in LP [1], which just mentions that this intermittent issue stopped hitting us after some changes in related areas.
Anyhow, the 2 patches below should fix the potential bug, and I have not seen the same error log again in our Ussuri upgrade EB.
https://review.opendev.org/#/c/704034/  Prevent splitbrain during full Galera restart
https://review.opendev.org/#/c/708071/  mariadb: avoid state management thread death
Since we have fully passed testing, we'd better push to merge the Ussuri upgrade / openstack-helm rebasing patches soon. https://review.opendev.org/#/q/topic:for_ussuri+(status:open)
[1] https://bugs.launchpad.net/starlingx/+bug/1816842/
Thanks!
Zhipeng

-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: June 5, 2020 22:32
To: Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io; Friesen, Chris <Chris.Friesen@windriver.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Zhipeng:
This looks promising. Your theory is that the 2 openstack-helm-infra patches will fix the mariadb recovery issues. These 2 patches were merged in the openstack-helm-infra project in January and February of 2020. What would be good to know is what broke mariadb recovery between April of 2019 when Chris Friesen finished up his story [1] and our current loads today. The most likely explanation is the upversion of Train or the upversion to openstack-helm-infra done in November 2019 introduced the mariadb recovery issues. And then the openstack-helm folks found and fixed the issue earlier in 2020.
If we had more time the preferred approach would be to merge just the openstack-helm-infra changes first to prove they address mariadb recovery and then in a separate commit merge Ussuri. But since you have validated that mariadb recovers with your Ussuri branch and this branch has these openstack-helm commits, I support letting Ussuri merge into stx.4.0.
Frank

[1] https://storyboard.openstack.org/#!/story/2004712
-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: Friday, June 05, 2020 2:36 AM
To: Miller, Frank <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Friesen, Chris <Chris.Friesen@windriver.com>
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank,
As for OpenStack not recovering after both controllers are reset [1]: I could not reproduce this issue with my Ussuri upgrade EB. My test steps were:
1) ssh to the standby controller and run "sudo reboot -f" on it.
2) Run "sudo reboot -f" on the active controller.
All pods can resume after a while.
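For reference, here is a minimal sketch of that reset-and-verify sequence (the controller hostname and the openstack namespace are assumptions for a typical AIO-DX lab):

    # From the active controller, force-reboot the standby first:
    ssh controller-1 'sudo reboot -f'

    # Then force-reboot the active controller itself:
    sudo reboot -f

    # Once both controllers are back, watch until all pods are Running/Ready:
    kubectl get pods -n openstack -w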
However, I could reproduce this issue with daily build 20200516T080009Z. From the error logs, it is an old issue analyzed by Chris Friesen in [2] early last year.
In the Ussuri upgrade EB, we rebased openstack-helm-infra/mariadb. It includes the 2 patches below, which fixed this stability issue.
https://review.opendev.org/#/c/704034/  Prevent splitbrain during full Galera restart
https://review.opendev.org/#/c/708071/  mariadb: avoid state management thread death
[1] https://bugs.launchpad.net/starlingx/+bug/1881899
[2] https://bugs.launchpad.net/starlingx/+bug/1816842/comments/3
Thanks! Zhipeng
-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: June 3, 2020 22:35
To: Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Zhipeng:
This is not a new requirement. Users expect the software to recover when resets occur.
As I mentioned at the PTG yesterday, I know personally that this test passed in stx.3.0 before the upversion to Train. Someone else who performs testing can determine when this test was last run, as it should have been covered in feature testing after Train was delivered, and as part of stx.3.0 as well. I do not know when this started to break. One topic we will discuss at the PTG tomorrow is how to improve our test coverage and automation so this type of issue can be found immediately as new code is delivered.
Frank
-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: Wednesday, June 03, 2020 10:28 AM
To: Miller, Frank <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Frank,
Have we passed this case before? Is it a new requirement?
Thanks! Zhipeng
-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: June 3, 2020 22:12
To: Miller, Frank <Frank.Miller@windriver.com>; Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Yong/Zhipeng - the LP for openstack not recovering after both controllers are reset is https://bugs.launchpad.net/starlingx/+bug/1881899
Ovidiu is investigating and will provide any updates from his investigation. Please continue to keep us informed of your investigation.
Frank
-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: Tuesday, June 02, 2020 10:38 PM
To: Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
We used a build from May 28.
As for the decoupling issues, these are actively being worked. If you run the system helm-override-show command when the stx-openstack app is applied, you won't see the CLI command fail. It only fails when you try helm-override-show while the app is in the uploaded state. In any case this will be fixed shortly.
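For illustration, a sketch of the two cases (the chart and namespace arguments follow the usage quoted elsewhere in this thread):

    # Check the app status first; helm-override-show works once it is "applied":
    system application-list
    system helm-override-show stx-openstack mariadb openstack

    # The failure under discussion occurs only while stx-openstack is
    # still in the "uploaded" state.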
Frank
-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: Tuesday, June 02, 2020 10:04 PM
To: Miller, Frank <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank,
Thanks for your quick update! Which build are you using to test this case? Since the decoupling commits introduced several regressions (at least 2), I would not propose doing this kind of stability test with the latest build. BTW, do we have a plan to revert them, considering this stability risk? Our Ussuri upgrade patches are waiting for it ☹
Furthermore, we have not seen this test case of force-rebooting both controllers at the same time. Is it a new requirement? If not, which build did we pass this case on before? I'd like to help on it, using that passing build for comparative analysis. From my point of view, mariadb might not work if we reboot both controllers.
Thanks! Zhipeng
-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: June 3, 2020 8:55
To: Miller, Frank <Frank.Miller@windriver.com>; Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Zhipeng:
An update on our testing and analysis today. We are able to reproduce the issue with OpenStack not recovering when we trigger a reboot of both AIO controllers at the same time. This results in MariaDB and multiple other OpenStack pods in CrashLoopBackoff and openstack commands not working indefinitely after the controllers recover. We'll create a launchpad tomorrow to track this issue.
Frank
-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: Tuesday, June 02, 2020 12:25 PM
To: Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Thanks Zhipeng for the analysis. What is challenging here is the multitude of issues.
In our debugging of openstack over the past few days we have seen the app fail completely. After investigation, this turned out to be a Day 1 containerd issue. It is tracked in LP: https://bugs.launchpad.net/starlingx/+bug/1881353
The issue you are seeing on a swact is a new and very recent issue tied to the decoupling commits that were merged late last week. Bob is investigating and I expect he'll have a fix soon for that.
But the issues we are most concerned with are when we see mariadb crashing and not able to recover or with openstack services not working for longer periods of time. We're attempting to isolate the sequence of events that trigger this.
Frank
-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: Tuesday, June 02, 2020 11:47 AM
To: Miller, Frank <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
For LP https://bugs.launchpad.net/starlingx/+bug/1881454 (Unable to unlock controller after swact and lock w/ openstack applied):
I also tested with daily build 20200516T080009Z, and it could not be reproduced there. We should fix this regression ASAP!
Thanks! Zhipeng
-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: June 2, 2020 16:48
To: Miller, Frank <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Church, Robert <Robert.Church@windriver.com>
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank and all,
Update for issue 2. I raised a new LP to track it.
https://bugs.launchpad.net/starlingx/+bug/1881722
Below are the time statistics. They seem reasonable; no obvious issue found.
1) 3~4 min for host restart and getting ready.
2) 2~3 min for mariadb terminating, initialization, and getting ready (then the configmap sync is ready).
3) 2 min for ovs-db to become ready. (Reducing the liveness/readiness probe timers can improve this a little, as it can retry more quickly to connect to ovs-vsctl: unix:/var/run/openvswitch/db.sock.)
4) 1 min for other pods to become ready, like neutron-ovs-agent, which depends on ovs-db.
Any comments?
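One way to collect these timings (a sketch; the namespace and pod name are taken from the logs later in this thread):

    # Watch pod state transitions after the reboot:
    kubectl get pods -n openstack -w

    # Pull the Ready-condition transition time for a specific pod:
    kubectl get pod -n openstack openvswitch-db-8fxkw \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].lastTransitionTime}'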
For LP https://bugs.launchpad.net/starlingx/+bug/1881454 (Unable to unlock controller after swact and lock w/ openstack applied)
and https://bugs.launchpad.net/starlingx/+bug/1881711 (system helm-override-show stx-openstack mariadb openstack crash):
Both seem related to the openstack plugin decoupling patches and should be regressions. Please see our updates in these 2 LPs for detailed info.
@Bob, could you please help check them further against your patches? Thanks!
Thanks! Zhipeng
-----Original Message-----
From: Liu, ZhipengS
Sent: June 1, 2020 16:20
To: 'Miller, Frank' <Frank.Miller@windriver.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; Jascanu, Nicolae <nicolae.jascanu@intel.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank,
I also tested issue 2 with the latest daily build on a duplex setup. The conclusion is that the issue has been there all along. It might not be fixed soon, but it should not block the OpenStack upgrade, right?
For the 9 OpenStack patches below, I have removed all Workflow-1 votes except on the first patch, and added Depends-On to all of them.
https://review.opendev.org/#/q/topic:for_ussuri+(status:open)
Your review and comments are welcome!
As for issue 2, some detailed info FYI. With the master build it also needs to wait around 10 min before all pods are ready again after a reboot. It is stuck on the 2 pods below for 10 min, the same as what I saw with my OpenStack upgrade engineering build.
neutron-ovs-agent-controller-0-937646f6-xxznw (depends on openvswitch-db)
openvswitch-db-8fxkw
Related key logs below.
Warning  FailedMount  2m19s  kubelet, controller-1  MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition
Warning  FailedMount  2m19s  kubelet, controller-1  MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition
Warning  FailedMount  105s   kubelet, controller-1  MountVolume.SetUp failed for volume "openvswitch-db-token-7g4qk" : failed to sync secret cache: timed out waiting for the condition
Warning  FailedMount  105s   kubelet, controller-1  MountVolume.SetUp failed for volume "openvswitch-bin" : failed to sync configmap cache: timed out waiting for the condition
Warning  Unhealthy    30s    kubelet, controller-1  Liveness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)
Warning  Unhealthy    7s     kubelet, controller-1  Readiness probe failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)
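These events can be gathered directly from the cluster (a sketch; the pod name is from this particular run):

    # Show recent events for the stuck pod, including the FailedMount warnings:
    kubectl describe pod -n openstack openvswitch-db-8fxkw

    # Or list all warning events in the namespace, sorted by time:
    kubectl get events -n openstack --field-selector type=Warning \
        --sort-by=.lastTimestamp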
Is it the same stability issue as the one reported by your test team? I can only see this issue after a force reboot. What is our expected recovery time? Your comments are appreciated!
Thanks!
Zhipeng

-----Original Message-----
From: Liu, ZhipengS
Sent: May 29, 2020 9:42
To: 'Miller, Frank' <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Jascanu, Nicolae <nicolae.jascanu@intel.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank,
Glad to see your quick reply!! For the OpenStack upgrade task, we finished all tests and have had the patches ready for more than 2 weeks, but there have been no review comments or feedback from your side. What's the next step?
For issue #2, in the community meeting notes I saw that you had some stability issues from the WR local test team. But so far I do not see any LP with the detailed info. You should ask them to file one, right?
Regarding your concern, I tried to reproduce it yesterday with my build (with the OpenStack upgrade patches cherry-picked); the original issue [1] was not seen any more, mariadb got ready quickly, and there was no regression.
[1] https://bugs.launchpad.net/starlingx/+bug/1855474
Thanks! Zhipeng
-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: May 29, 2020 1:07
To: Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io; Jascanu, Nicolae <nicolae.jascanu@intel.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Thanks Zhipeng.
Good to see progress on IPv6. Waiting for 10 minutes for pods to recover isn't a good result. Is there a LP open on this issue? Which pods are not ready? What can you tell us about this 10 minute outage?
Frank
-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: Thursday, May 28, 2020 5:06 AM
To: Miller, Frank <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Jascanu, Nicolae <nicolae.jascanu@intel.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank,
Nicolae already added test case description. Thanks Nicolae!
I also did the test below on an AIO-DX virtual setup, exactly following your mentioned steps. No issues found, but we just need to wait around 10 min before all pods are ready again after the reboot.
For the IPv6 issue, I have submitted new patches, since the dynamic override for the database config did not work.
https://review.opendev.org/#/c/731461/
https://review.opendev.org/#/c/731470/
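For context, the dynamic-override route that failed would look roughly like this (a sketch; the file name is hypothetical and the values mirror the static override quoted later in this thread):

    # overrides.yaml (hypothetical):
    #   conf:
    #     database:
    #       config_override: |
    #         [mysqld]
    #         bind_address=::
    system helm-override-update --values overrides.yaml stx-openstack mariadb openstack
    system application-apply stx-openstack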
Thanks! Zhipeng
-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: May 27, 2020 22:43
To: Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io; Jascanu, Nicolae <nicolae.jascanu@intel.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Zhipeng:
Thanks for the info. You have provided the # of testcases but not what those testcase do. Where can I find a description of what the OpenStack testcases do?
For the controller reset testcases I'd like to see the test results for the following. Is openstack usable during these scenarios on AIO-DX and on Standard configurations:
- Lock/unlock of the standby controller
- Reset (ie: reboot -f) of the standby controller
- Reset (ie: reboot -f) of the active controller
- Reapply of stx-openstack after the above scenarios
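A sketch of the corresponding commands (controller names assume a standard two-controller lab):

    # Lock/unlock the standby controller:
    system host-lock controller-1
    system host-unlock controller-1

    # Reset the standby, then the active, controller:
    ssh controller-1 'sudo reboot -f'
    sudo reboot -f

    # Reapply stx-openstack and spot-check that openstack is usable:
    system application-apply stx-openstack
    openstack server list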
Frank
-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: Wednesday, May 27, 2020 9:15 AM
To: Miller, Frank <Frank.Miller@windriver.com>; starlingx-discuss@lists.starlingx.io; Jascanu, Nicolae <nicolae.jascanu@intel.com>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi Frank,
We have done the tests below.
1) Sanity tests by Nicolae.

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs ]
AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs ]
Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs ]
Standard External - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs ]
2) NFV scenario test, by me, on duplex/multi-standard virtual setups and a duplex bare metal setup.

===== Setup =====
2020-05-14 02:30:05.524 Create flavor small ........................................ [OKAY]
2020-05-14 02:30:05.524 Create flavor small_ephemeral .............................. [OKAY]
2020-05-14 02:30:05.524 Create flavor small_swap ................................... [OKAY]
2020-05-14 02:30:05.524 Create flavor small_ephemeral_swap ......................... [OKAY]
2020-05-14 02:30:05.524 Create flavor medium ....................................... [OKAY]
2020-05-14 02:30:05.524 Create flavor medium_ephemeral ............................. [OKAY]
2020-05-14 02:30:05.524 Create flavor medium_swap .................................. [OKAY]
2020-05-14 02:30:05.524 Create flavor medium_ephemeral_swap ........................ [OKAY]
2020-05-14 02:30:05.653 Create image cirros ........................................ [OKAY]
2020-05-14 02:30:05.695 Create volume cirros ....................................... [OKAY]
2020-05-14 02:30:05.695 Create volume cirros-ephemeral ............................. [OKAY]
2020-05-14 02:30:05.695 Create volume cirros-swap .................................. [OKAY]
2020-05-14 02:30:05.695 Create volume cirros-ephemeral-swap ........................ [OKAY]
2020-05-14 02:30:05.695 Create volume empty_volume ................................. [OKAY]
2020-05-14 02:30:05.786 Create network internal .................................... [OKAY]
2020-05-14 02:30:06.158 Create network external .................................... [OKAY]
2020-05-14 02:30:06.772 Create subnet internal ..................................... [OKAY]
2020-05-14 02:30:07.661 Create subnet external ..................................... [OKAY]
2020-05-14 02:30:08.553 Create instance cirros-1 ................................... [OKAY]
2020-05-14 02:30:29.918 Create instance cirros-ephemeral-1 ......................... [OKAY]
2020-05-14 02:30:43.160 Create instance cirros-swap-1 .............................. [OKAY]
2020-05-14 02:30:56.101 Create instance cirros-ephemeral-swap-1 .................... [OKAY]
2020-05-14 02:31:09.077 Create instance cirros-image-1 ............................. [OKAY]
2020-05-14 02:31:21.241 Create instance cirros-image-with-volumes-1 ................ [OKAY]

===== Test Iteration 0 (single-execution) =====
2020-05-14 02:33:04.172 Test Instance-Pause ........................................ [OKAY] (2020-05-14 02:33:18.078 Δ=0:00:12.870)
2020-05-14 02:33:35.073 Test Instance-Unpause ...................................... [OKAY] (2020-05-14 02:33:41.608 Δ=0:00:05.866)
2020-05-14 02:33:53.049 Test Instance-Suspend ...................................... [OKAY] (2020-05-14 02:33:59.546 Δ=0:00:05.792)
2020-05-14 02:34:11.103 Test Instance-Resume ....................................... [OKAY] (2020-05-14 02:34:17.756 Δ=0:00:05.937)
2020-05-14 02:34:29.269 Test Instance-Reboot (soft) ................................ [OKAY] (2020-05-14 02:36:45.923 Δ=0:02:15.748)
2020-05-14 02:37:02.160 Test Instance-Reboot (hard) ................................ [OKAY] (2020-05-14 02:37:14.504 Δ=0:00:11.704)
2020-05-14 02:37:30.673 Test Instance-Stop ......................................... [OKAY] (2020-05-14 02:38:44.543 Δ=0:01:13.220)
2020-05-14 02:39:00.481 Test Instance-Start ........................................ [OKAY] (2020-05-14 02:39:07.198 Δ=0:00:06.068)
2020-05-14 02:39:18.578 Test Instance-Live-Migrate ................................. [OKAY] (2020-05-14 02:39:41.692 Δ=0:00:22.306)
2020-05-14 02:39:57.927 Test Instance-Cold-Migrate ................................. [OKAY] (2020-05-14 02:41:22.720 Δ=0:01:24.179)
2020-05-14 02:41:38.995 Test Instance-Cold-Migrate-Confirm ......................... [OKAY] (2020-05-14 02:41:45.441 Δ=0:00:05.884)
2020-05-14 02:41:57.108 Test Instance-Cold-Migrate-Revert .......................... [OKAY] (2020-05-14 02:43:36.381 Δ=0:00:21.637)
2020-05-14 02:43:52.320 Test Instance-Resize ....................................... [OKAY] (2020-05-14 02:45:16.409 Δ=0:01:22.812)
2020-05-14 02:45:32.723 Test Instance-Resize-Confirm ............................... [OKAY] (2020-05-14 02:45:39.119 Δ=0:00:05.777)
2020-05-14 02:45:50.437 Test Instance-Resize-Revert ................................ [OKAY] (2020-05-14 02:47:30.175 Δ=0:00:21.748)
2020-05-14 02:47:46.230 Test Instance-Rebuild ...................................... [OKAY] (2020-05-14 02:48:59.762 Δ=0:01:12.980)

Total-Tests: 16    Execution-Time: 0:16:11.676
3) Another 2 tests:

a) Using IPv6

It can pass with a workaround now; I need one more fix for it.
In my previous patch https://review.opendev.org/#/c/716524 (merged), I dynamically overrode the config as below:
    config_override: |
        [mysqld]
        bind_address=::
However, it does not work now. The log shows the error:
    OpenStack-Helm Mariadb - INFO - b'error: Found option without preceding group in config file: /etc/mysql/conf.d/20-override.cnf at line: 1'
I tried many methods but could not remove the first line in 20-override.cnf:
    mysql@mariadb-server-0:/etc/mysql/conf.d$ cat 20-override.cnf
    |-
    [mysqld]
    bind_address=::
I can only add it in manifest.yaml as a static override, like below:
    values:
      conf:
        database:
          config_override: |
            [mysqld]
            bind_address=::

b) Reset of controllers and check the status of OpenStack while a controller is rebooting.

I have tested this and it passes on simplex. For duplex, I have a setup issue on my side.
@Jascanu, Nicolae: could you help me with the duplex test, if you have time today? Thanks!
Zhipeng
-----Original Message-----
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: May 26, 2020 21:13
To: Liu, ZhipengS <zhipengs.liu@intel.com>; starlingx-discuss@lists.starlingx.io
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Zhipeng:
Can you publish the list of tests that have been run for openstack?
Also, has openstack been tested for the following scenarios:
1) Using IPv6
2) Reset of controllers, checking the status of openstack while a controller is rebooting?
Frank
-----Original Message-----
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: Monday, May 25, 2020 3:14 AM
To: starlingx-discuss@lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi all,
We have passed all sanity tests on every setup. Thanks Nicolae!! We also built out the OpenStack service images from the layered build environment.
Please help to review the patches below and push them to be merged, thanks!
https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status:merged)
BRs Zhipeng
-----Original Message-----
From: Liu, ZhipengS
Sent: May 14, 2020 16:49
To: 'Saul Wold' <sgw@linux.intel.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi all,
Call for patch review again!
https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status:merged)
Thanks! Zhipeng
-----Original Message-----
From: Liu, ZhipengS
Sent: May 9, 2020 8:38
To: Saul Wold <sgw@linux.intel.com>; starlingx-discuss@lists.starlingx.io
Subject: RE: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Agree!
-----Original Message-----
From: Saul Wold <sgw@linux.intel.com>
Sent: May 9, 2020 0:29
To: starlingx-discuss@lists.starlingx.io
Subject: Re: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
I would strengthen that to: no changes until we get a green sanity, other than what's required to make it green.
Full Stop!
Sau!
On 5/8/20 9:05 AM, Miller, Frank wrote:
Until we can get sanity passing for several days in a row I strongly suggest we do not allow any further changes into the load related to OpenStack. Folks can continue with reviews but let’s hold off allowing merges related to a new OpenStack version.
Frank
From: Liu, ZhipengS <zhipengs.liu@intel.com>
Sent: Friday, May 08, 2020 11:59 AM
To: starlingx-discuss <starlingx-discuss@lists.starlingx.io>
Cc: YU CHENGDE <yu.chengde@99cloud.net>; Penney, Don <Don.Penney@windriver.com>
Subject: [Starlingx-discuss] [OpenStack Ussuri Upgrade Task] Call for patch review!!
Hi all,
Please help to review OpenStack Ussuri upgrade patches.
Our target is to get all below patches merged by end of next week.
https://review.opendev.org/#/q/topic:for_ussuri+(status:open+OR+status:merged)
During the OpenStack upgrade for StarlingX, we have to move from python2.7 to python3.6 for the OpenStack services, as the Ussuri release only supports python3.
We also rebased openstack-helm/helm-infra to the latest versions.
Engineering build test status.
1. nfv_scenario_tests PASS on simplex/duplex/multi virtual setups.
2. nfv_scenario_tests PASS on simplex bare metal setup.
3. Sanity testing is ongoing; duplex/standard virtual setup tests PASS.
Thanks!
Zhipeng