[Starlingx-discuss] distro.openstack meeting Oct 15 2019

Jones, Bruce E bruce.e.jones at intel.com
Tue Oct 15 13:38:24 UTC 2019


10/15/2019 meeting

TL;DR:  The Train update is nearing completion.  We need reviewers and Cores to review the changes below.  Yong to provide the team with info on the puppet issues seen in upgrading the OpenStack clients.  The team asks Chris F and Brent to read the thread below regarding "time to detect a failed VM" and provide any background or history they can.  Yong will be the new PL for the sub-project; no nominations for the TL role were made.


*         Train update status - Zhipeng https://storyboard.openstack.org/#!/story/2006544

  *   Basic deployment and VM creation/removal tests pass in the AIO setup!

  *   Engineering build is out for testing (Sanity requested) - one issue was found and fixed; a new image is ready for Sanity.

  *   Current status is summarized below.

  *   1) OpenStack upgrade
     -  Finished upgrading 13 services: nova, neutron, keystone, cinder, glance, placement, aodh, ironic, panko, barbican, ceilometer, heat, and horizon.

  *   2) openstack-helm / openstack-helm-infra upgrade
     -  Upgraded openstack-helm to the commit below; 13 patches removed.
        *  Are these patches still in the stx-upstream repo?  They appear to have been removed in https://review.opendev.org/#/c/683886/
        *  commit 82c72367c85ca94270f702661c7b984899c1ae38 (Sat Sep 14 06:40:03 2019 +0000)
     -  Upgraded openstack-helm-infra to the commit below; 1 patch removed.
        *  commit c9d6676bf9a5aceb311dc31dadd07cba6a3d6392 (Mon Sep 16 17:15:12 2019 +0000)

  *   6 patches submitted for review; your comments are appreciated, thanks!
     -  https://review.opendev.org/#/c/683886/
     -  https://review.opendev.org/#/c/683910/
     -  https://review.opendev.org/#/c/684166/
     -  https://review.opendev.org/#/c/687441/
     -  https://review.opendev.org/#/c/687197/
     -  https://review.opendev.org/#/c/688105/

  *   Test plan status - Yong
     -  Testing will include the previous test plan plus IPv6.

  *   OpenStack clients - some of them depend on puppet versions that are not available in CentOS 7 but are in CentOS 8.
     -  Yong to send details to Dean.
*         Detection of a failed VM - re-architecture discussion

  *   Victor has defined a set of performance tests (see the spec) and the Intel team has run them.

  *   The overall results look good except for two issues, one of which is "time to detect a failed VM".  StarlingX performs poorly, relative to the original seed code, on the test measuring this KPI.  A rough measurement sketch is shown below.
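
  *   A minimal, hypothetical sketch of how this KPI might be measured (the real procedure is the one defined in Victor's spec; the cloud name and instance UUID below are placeholders): crash the VM out of band, e.g. by killing its qemu-kvm process on the compute host, and time how long nova takes to reflect the failure.

        # Hypothetical sketch: time how long nova takes to notice a crashed VM.
        # The "starlingx" cloud name and SERVER_ID are placeholders.
        import time
        import openstack

        conn = openstack.connect(cloud="starlingx")   # assumes a clouds.yaml entry
        SERVER_ID = "REPLACE-WITH-INSTANCE-UUID"

        # Out of band, on the compute host, kill the qemu-kvm process backing the VM.
        start = time.monotonic()
        while True:
            server = conn.compute.get_server(SERVER_ID)
            if server.status != "ACTIVE":             # nova has reflected the failure
                break
            time.sleep(0.5)                           # polling interval bounds accuracy

        print("failure reflected after %.1fs (status=%s)"
              % (time.monotonic() - start, server.status))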

  *   We had an internal discussion with the Intel Nova team - here are the results.  We are looking for @Chris and @Brent to provide some missing history of this work.

  *   Eric Fried wrote:
     -  [Later] After skimming through the nova code, I can see where we're reacting to certain libvirt.VIR_DOMAIN_EVENT_*s by sending lifecycle events to the handler registered by the compute manager. That handler is set up to send notifications, and also to take actions in special cases.
     -  I'm only guessing, but I suspect the existing "special cases" are not something we necessarily want to be building on, so I kind of doubt the "take some action" part of the chain would be something nova would want to upstream.
     -  However, I think you could definitely make a case for adding support for a new lifecycle event type corresponding to one or more additional VIR_DOMAIN_EVENT_*s, and include those in notifications. At that point, an external orchestrator (is that what StarlingX is?) can listen for those notifications and react accordingly (by destroying the instance, rebooting it, whatever).
     -  I also think you could make a case for changing the VM's state when one of these new transitions occurs; though I'm not sure whether a new vm_state would be okay, or whether you would just have to use _STOPPED.
     -  I think the best way forward here is to propose a blueprint & spec to hammer out the details of the above. I think it should be a fairly straightforward thing to get technical agreement on. And resurrecting the code itself would be no big deal.
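
  *   A hedged sketch of the "external orchestrator listens for notifications and reacts" idea Eric describes, using an oslo.messaging notification listener.  The transport URL, topic, and event-type filter below are assumptions for illustration, not the actual StarlingX or nova configuration.

        # Hedged sketch: an external orchestrator consuming nova notifications
        # and reacting to instance lifecycle events.  The transport URL, topic,
        # and event-type prefix are placeholders, not StarlingX's real config.
        import oslo_messaging as messaging
        from oslo_config import cfg


        class LifecycleEndpoint(object):
            # Invoked for INFO-level notifications (nova emits instance
            # lifecycle notifications at this level).
            def info(self, ctxt, publisher_id, event_type, payload, metadata):
                if event_type.startswith("compute.instance."):
                    # React here, e.g. trigger recovery of a crashed instance.
                    print(publisher_id, event_type, payload.get("state"))


        transport = messaging.get_notification_transport(
            cfg.CONF, url="rabbit://guest:guest@localhost:5672/")  # placeholder URL
        targets = [messaging.Target(topic="notifications")]
        listener = messaging.get_notification_listener(
            transport, targets, [LifecycleEndpoint()], executor="threading")
        listener.start()
        listener.wait()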

  *   Yongli wrote:
     -  That is all related to handling the KVM server CRASHED status and recovering the compute, but it depends on the Tic VIM software.
     -  It contains the following sub-parts:
        *  2d6afc9 US80863: Port KVM failure detection and recovery
        *  f383c58 US80977: Fix handling of KVM failure in mitaka
        *  8cfdb1a CGTS-7054: synchronize _sync_instance_power_state() from LifeCycle events
        *  cf95427 CGTS-7054: VMs remain in shutoff state after killing kvm process.
     -  BTW: oddly, these features are listed in the Cinder section of the Stein PTG etherpad: https://etherpad.openstack.org/p/nova-ptg-stein

  *   Dean wrote:
     -  For the record, this is in the released patch files as [0] which was a squash of at least six prior commits in earlier Titanium releases:
        *  0018.01 R3: 2d6afc9 Port KVM failure detection and recovery
           *  0018.01.01 R2: 7063b69 VIM: Refactor KVM failure detection and recovery
        *  0018.02 80cfc70 add audit to clean up orphan instances
           *  0018.02.01 6138f54c add audit to clean up orphan instances
           *  0018.02.02 5342042f Trouble destroying illegitimate instance
           *  0018.02.03 ac7b182d improve orphan audit robustness
        *  0018.03 f383c58 Fix handling of KVM failure in mitaka
        *  0018.04 8cfdb1a synchronize _sync_instance_power_state() from LifeCycle events
        *  0018.05 cf95427 VMs remain in shutoff state after killing kvm process.
        *  0018.06 d20e24a nova: Bug 272: orphan vm is not removed
     -  I have broken out the 0018 patch as shown above in [1] as 0018.XX[.YY]-* so they can be considered separately as required.
     -  Mario's patch appears to be basically 0018.01, 0018.03, 0018.04 and 0018.05.
     -  [0] https://github.com/starlingx-staging/stx-nova/blob/stx/old-master/stx-patches/0018-primary-KVM-failure-detection-recovery-and-orphan-in.patch
     -  [1] https://github.com/dtroyer/stx-nova/tree/stx-patch/stx-patches
*         No election is needed.  Yong Hu will be the new PL for this project.  No self-nominations for the TL position were made.



