[Starlingx-discuss] Notes from the Nov 28th 2018 community meeting
Jones, Bruce E
bruce.e.jones at intel.com
Wed Nov 28 17:54:09 UTC 2018
Agenda and Notes - Nov 28th call
* We are looking into holding an in-person community meeting in January to discuss planning for these changes. Hosted by Intel in Phoenix (Chandler). Target date is January 15-16. On the Intel side we plan to send Bruce, David, Cindy, Cesar, Ada, Dean, Saul, and Victor. We would like to see the TSC and key folks from WR attend. All are welcome. Can we close on the dates and attendees?
o Dates are confirmed. Please start your travel arrangements.
o Bruce to get with Ildiko on eventbrite support.
* Patch resolution strategy update - Brent?
o To be discussed at this week's TSC call and then shared here next week.
* Sub-project status / help needed / issues?
o Build - there are a couple of things in flight right now.
+ We have a request to enable Go in the build. This will require network access during that part of the build. It is needed for K8S. There are some tensions between how OpenStack works and how Go works - the devil is in the details. Go handles dependencies by checking out master during builds; we need to figure out whether to copy the dependent packages into our build (vendoring) or rely on upstream master. We need a strategy for how to handle builds and for whether and how to handle vendoring and rebasing vendored dependencies.
+ We are working on a new download tool and have a new, faster version. Working on a transition plan and adding support for the Cengn mirror. The spec is posted.
+ The Cengn mirror is on a daily cadence - new ISOs are coming out every day. The spec is not yet closed.
* Ada - should we start running daily sanity on the Cengn ISOs? Yes! Ken - can we create a dashboard with the status of the sanity tests and links to any LPs filed? Ken - let's collaborate on that.
* Ian - how often / when do we clean up the Cengn servers? Do we have a retention policy? Ken - the initial thought is to keep build artifacts for 7 days and ISOs for 14 days. Let's document the policy and automate the cleanup (a minimal cleanup sketch follows below).
* Dean will do a monthly branch/MS release this week to keep the gears running.
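A minimal cleanup sketch for the Cengn retention discussion above, assuming the 7-day / 14-day windows Ken suggested are adopted. The directory layout and paths below are illustrative placeholders, not the actual mirror layout:

    #!/usr/bin/env python3
    """Illustrative retention cleanup for the Cengn mirror (paths are assumptions)."""
    import shutil
    import time
    from pathlib import Path

    # Hypothetical layout: one dated sub-directory per daily build.
    RETENTION = {
        Path("/export/mirror/starlingx/builds"): 7,   # build artifacts: keep 7 days
        Path("/export/mirror/starlingx/isos"): 14,    # ISOs: keep 14 days
    }

    def prune(root: Path, keep_days: int) -> None:
        cutoff = time.time() - keep_days * 86400
        if not root.is_dir():
            return
        for entry in root.iterdir():
            # Use the directory's modification time as a stand-in for the build date.
            if entry.is_dir() and entry.stat().st_mtime < cutoff:
                print(f"removing {entry}")
                shutil.rmtree(entry)

    if __name__ == "__main__":
        for path, days in RETENTION.items():
            prune(path, days)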
* Test - Continuing to work on test automation, dealing with some internal issues. Discussing how to move some testing into the open. We are evaluating our initial performance testing results with Wind River and working on how to change our testing. We are measuring VM recovery time, swact time, etc.
+ Ian - we do not have an stx-test repo yet. Ada - our code is in an internal GitHub right now. We are discussing how to migrate the code to open source. Ada to bring a proposal to the TSC next week for review.
* Dist Cloud - Greg - we have a couple of specs out for review: an stx spec on synchronized Keystone functionality (from the Edge WG). We have also submitted a Glance spec for review with the Glance community for Stein.
* Docs - We have a direction for how to version the stx-docs repo / documents. Waiting on OSF to grant access to the web content; AR Bruce to ping Ildiko. Big changes are coming to the API documentation, starting with stx-ha - we are reviewing each API against the docs; we are not expecting big changes, but it's a lot of work. Working to update the build/install docs to take advantage of the Cengn repo and to document the new download tool.
* Distro - We discussed the kernel driver update patches - 6 are pending testing. The team does not have QIT-enabled hardware to test those changes. Machines are on order but might not be available until next year. Testing for the rest of the drivers can proceed. The team is working on the init/config patches - we have completed 24 of 27 tasks in the SB entry, with 3 changes out for review. This will eliminate 150+ patches. We can replace 13 SRPM packages with binary RPMs. Working on planning for next steps.
+ Working on the Python 2 -> 3 upgrade in the distro code; it should be finished by EOY. Working on 3rd party OS packages - looking at 400 packages: 313 of them are believed to be safe, and 28 packages need to be upgraded from 2 to 3. The remaining 56 do not have Python 3 support or need further work. We should look into which applications are using these packages and then discuss next steps (a rough classifier-check sketch follows below).
+ Which Python 3 version should we use - 3.5, 3.6, or 3.7? Guidance is to follow OpenStack community practice - 3.5 is a safe bet, but the RHEL 8 beta has 3.6 in it, and 3.6 will likely be the "official" Stein version.
+ QEMU upgrade - we need a branch on the staging trees to enable the code to be checked in.
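A rough aid for the Python 2 -> 3 triage above: a minimal sketch that asks PyPI whether a package advertises a Python 3 classifier. The package list is an illustrative placeholder; packages not published on PyPI (or carried as forks) would still need manual review:

    #!/usr/bin/env python3
    """Rough Python 3 support check against PyPI metadata (package list is illustrative)."""
    import json
    import urllib.request

    PACKAGES = ["six", "requests", "some-internal-package"]  # placeholder list

    def supports_py3(name: str) -> bool:
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                meta = json.load(resp)
        except Exception:
            return False  # not on PyPI or unreachable: flag for manual review
        classifiers = meta["info"].get("classifiers", [])
        return any(c.startswith("Programming Language :: Python :: 3") for c in classifiers)

    if __name__ == "__main__":
        for pkg in PACKAGES:
            status = "py3 ok" if supports_py3(pkg) else "needs review"
            print(f"{pkg}: {status}")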
* Security - All of our medium and higher security issues have been resolved, thank you all! We have two open Low priority issues, one of which is a CVE that has not been fixed upstream yet. We have a few other hardening issues to address. We are working on our policy and process documents; community review would be welcome. We've posted a banned C functions document for review - please take a look at the Security page on the wiki (a minimal scanner sketch follows below).
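One possible way to check code against the banned C function list during review or CI - a minimal scanner sketch. The function names below are generic examples (strcpy, strcat, sprintf, gets); the authoritative list is the document posted by the security team:

    #!/usr/bin/env python3
    """Minimal scanner for banned C functions (the function list here is a generic example)."""
    import re
    import sys
    from pathlib import Path

    # Example entries only; see the security team's banned-function document for the real list.
    BANNED = ("strcpy", "strcat", "sprintf", "gets")
    PATTERN = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

    def scan(path: Path) -> int:
        hits = 0
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")
                hits += 1
        return hits

    if __name__ == "__main__":
        root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
        total = sum(scan(f) for f in root.rglob("*.c")) + sum(scan(f) for f in root.rglob("*.h"))
        sys.exit(1 if total else 0)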
* Containers - We have been focused on the building blocks. We have most of them in place and can bring up a single server with OpenStack services in containers. We want to enable the community and will start by uploading docker images (a minimal publishing sketch follows below). The next step is to focus on the larger configurations. Setting up Monday team meetings at 1600 UTC / 8 AM PST.
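A minimal publishing sketch for the docker image upload mentioned above, using the docker Python SDK; the image names, tags, and target registry are placeholders, not the actual StarlingX repositories:

    #!/usr/bin/env python3
    """Illustrative re-tag and push of locally built images (names and registry are placeholders)."""
    import docker  # pip install docker

    client = docker.from_env()

    # Placeholder image names; the real list would come from the container build.
    LOCAL_IMAGES = ["stx-nova:latest", "stx-neutron:latest"]
    REGISTRY = "docker.io/exampleorg"  # hypothetical target registry

    for ref in LOCAL_IMAGES:
        name, tag = ref.split(":")
        image = client.images.get(ref)
        target = f"{REGISTRY}/{name}"
        image.tag(target, tag=tag)
        # push() streams the registry response; decode=True yields progress dicts.
        for line in client.images.push(target, tag=tag, stream=True, decode=True):
            if "status" in line:
                print(line["status"])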
* Releases - Mostly still resting. :) We need to update and/or confirm our release plans. Want to see the big deliverables / items and how they map over time. To be discussed at the TSC meeting.
* Networking - Focus continues on upstreaming against a prioritized list. At the top of the list is the network segment spec, which is central to the patch elimination strategy for the broader system. To be discussed at the next meeting with Miguel.
* MultiOS -
* DevStack - We have DevStack running with two services integrated, with very minimal functionality. Dean's goal is to enable development of the Flock services using DevStack. We are working on installing the services now. Long term, testing of APIs can be provided.
* Zuul -