General Links
* http://ptg.openstack.org/ptg.html
* https://etherpad.opendev.org/p/stx-ptg-planning-april-2021 (these notes)

Time slots
* Tuesday April 20, 1300 UTC - 1700 UTC
* Wednesday April 21, 1300 UTC - 1700 UTC
  o 1300 UTC, Mitaka room - Joint session with the OpenInfra Edge Computing Group
* Thursday April 22, 1300 UTC - 1700 UTC

Use the Tuesday slot for "State of the project".
Use the remaining slots for release planning and feature discussions.

Attendees (name, company, email, activities/interest in StarlingX)
* Ildiko Vancsa, Open Infrastructure Foundation, <ildiko@openinfra.dev>, Community Manager for StarlingX
* Greg Waines, <greg.waines@windriver.com>, TSC member, StarlingX Security
* Bruce Jones, <bruce.e.jones@Intel.com>, TSC lead, Docs core reviewer
* Dariush Eslimi, <Dariush.Eslimi@windriver.com>, TSC member, PL flock, DC, config
* Ghada Khalil, Wind River, <ghada.khalil@windriver.com>, StarlingX Release Prime & Security PL
* Frank Miller, <frank.miller@windriver.com>, PL build, containers
* Bart Wensley, <barton.wensley@windriver.com>, TL Flock Services and Distributed Cloud
* Thiago Brito, <thiago.brito@windriver.com>, stx-openstack
* Austin Sun, <austin.sun@intel.com>, OpenStack distro project PL
* Saul Wold, <saul.wold@windriver.com>, ex-TSC member and ex-Distro/Build lead (supporting Mark on the Debian transition)
* Shuquan Huang, <huang.shuquan@99cloud.net>, TSC member
* Mary Camp, <maryx.camp@intel.com>, Docs project lead
* Mingyuan Qi, <mingyuan.qi@intel.com>, TSC member
* Matt Peters, <matt.peters@windriver.com>, Wind River
* Bill Zvonar, <bill.zvonar@windriver.com>, Wind River
* Mark Asselstine, Wind River, <mark.asselstine@windriver.com>, proposed TL for STX Distro/Build
* Tytus Kurek, <tytus.kurek@canonical.com>, Canonical
* Haridhar Kalvala, <haridhar.kalvala@intel.com>, Intel, FM containerization
* Michel Thebeau, <michel.thebeau@windriver.com>, Wind River, Security with Ghada and Greg
* Shuai Zhao, <zhaos@neusoft.com>, Neusoft, participated in the CentOS 8 STX kernel upgrade and SRPM upgrades
* Steve Geary, <steve.geary@windriver.com>, Wind River
* George Kalpaktsoglou, <gkalpak@fogus.gr>, Fogus Innovation
* Nicolae Jascanu, <nicolae.jascanu@intel.com>, Intel Validation Team
* Weiyuan Wang, <wangweiyuan@neusoft.com>, Neusoft
* Ramaswamy Subramanian, <ramaswamy.subramanian@windriver.com>, Wind River
* Chuck Short, <charles.short@windrivers.com>, Wind River

PTG Topics
* "State of the project" / Retrospective / big ticket items
  o Feedback
    - Items to continue doing
      * Large feature content in R5
      * Docs are improving ... much more content
      * Good support from the community for users
      * Release notes system seems to work well, having good information there
      * Release cycle is much more on time for 5.0
        o No big anchor features
        o Better planning on the execution of feature development, as well as moving features to the next release if they don't fit
        o New time for the release management meeting works much better
      * Testing automation
        o Should look into status and action plans
        o Review what's in the sanity tests
      * Build system works well for now; need to look into it for the OS change work
    - Items to stop doing / improve
      * Use IRC more and be more responsive there (+1), and on the mailing list (+1)
      * Contributor diversity needs to improve (+1)
      * Need to reduce barriers to entry and adoption
      * Doc clarity / organization could be improved
      * Decrease project complexity and make the project more modular
        o Some items will be discussed during the Debian work session
        o There are some plans to break up the monolith <-- build system updates needed
        o How is this related to making the project more appealing and easier to approach?
          - Smaller, more standalone components are easier to understand and easier to contribute to
          - What is the value of smaller components to break out and maintain?
            * designed to change
            * easier to debug when components can be turned off
            * components can be versioned separately
            * make the OS small to make it easier to swap out
        o How to leverage the most out of the change the Debian work brings in?
          - Build should get easier
          - Drop some of the technical debt and write better documentation
          - Work towards being OS-independent
      * Launchpad
        o Tagging and updates are spotty
        o PLs probably need to manage the bug backlog in the tool better
      * Project team structure
        o Current: https://docs.starlingx.io/governance/reference/tsc/projects/index.html
        o Preferred
          - Containers team/activity is probably obsolete
          - How is the work organized within the community?
            * Review structure
            * Bug handling
            * Integration between components
          - Areas
            * Documentation
            * Build and Integration
            * Code
          - e.g. a proposal
            * Projects
              o Infrastructure
                - Build
                - Release
                - Test
                - Docs
              o Code
                - OS
                - BareMetal Mgmt
                - Kubernetes
                - OpenStack
                - Distributed Cloud
      * CentOS changes
* Community / user adoption / lowering the barriers to entry
  o Training
    - Resurrect the StarlingX Workshops [Greg]
      * virtual workshops
    - Make content available
    - Supporting on-boarding documentation to add/update/give it more structure
      * pointers to the project documentation that StarlingX integrates
        o Kubernetes quick start guide
        o OpenStack guides
        o etc.
  o Helping users
    - (Believe the majority of new community members asking questions on the mailing list are users rather than contributors)
    - Installation seems to be tricky for newcomers
      * Documentation is great for the steps
      * No clear picture about how things work
        o Hard to figure out what to look for
        o Not easy to figure out how to get started with the platform
        o Top-down approach for docs?
        o More details about things like the mirror registry would be good
          - Probably already available, needs to be referenced better
          - Have example use cases and structure the docs based on them
      * In some cases a mirror for packages is needed due to bad connectivity
      * Troubleshooting guide for installation would be needed
        o Use the mailing list entries and questions from other forums as FAQ entries in the guide
        o Everyone is encouraged to contribute to the FAQ/Troubleshooting guide
    - ...
  o Helping users and contributors
    - Communication
      * How to make time for better communication?
        o Office hours?
          - Describe the idea and send it out to the ML for the community to decide on the details - AI: Steve Geary to send out the mail to continue the discussion
        o Mailing list
        o IRC
          - Also encourage people to participate and contribute
      * Sharing experience on the mailing list and in meetings, etc.
    - ...
  o Information sharing
    - Blog
      * https://www.starlingx.io/blog/
      * Content ideas
        o Release intro - combine it with the release notes activity
          - Get technical information from contributors and use that info for blog posts
        o Case studies and blogs from users
    - Events
      * Volunteers - Greg is interested in speaking opportunities
    - ...
* Other big issues?
  o Review team structure and update as needed
* Pain points / problems / ideas for improvements?
* Bitergia numbers
  o Git data
    - Since project launch
      * 49 repos
      * 267 authors
      * 9807 commits
      * 12 identified organizations
        o Wind River ~67%
        o Intel ~27%
        o Unknown ~3%
        o FiberHome ~1%
    - During the 5.0 release cycle
      * 41 repos
      * 103 authors
      * 1092 commits
      * 4 identified organizations
        o Wind River ~88%
        o Intel ~8%
        o Unknown ~2.9%
        o OIF ~1%
        o 99Cloud ~0.1%
  o Mailing list
    - Since project launch
      * 11111 emails
      * 339 senders
      * 18 identified organizations
    - During the 5.0 release cycle
      * 1806 emails
      * 108 senders
      * 8 identified organizations
  o Gerrit
    - Contributors since launch: 385
    - Contributors during the 5.0 release cycle: 146
* 6.0 proposed features
  o Please add your new features here with your name
  o Matt Peters / Bart Wensley / Greg Waines
    - ( https://drive.google.com/drive/folders/1-DiBQwRWG4bCWnb24Eci4YHQxeg0aN5d )
    - Container Component Upgrade [MATT]
      * Intent is to update K8S to 1.21 (from 1.18) and related components
        o containerd 1.4.4, etcd 3.4.13, Calico 3.18, Multus 3.7.1, SR-IOV CNI 3.3.1, CNI plugins 0.9.1
      * Upgrading K8S requires moving from one version to the next in sequence (see the version-stepping sketch at the end of these notes)
      * VIM orchestration will need to be changed to perform the incremental updates
    - Distributed Cloud Scaling [BART]
      * In 5.0, Distributed Cloud supports Simplex, Duplex and Standard subclouds, up to 200 AIO-Simplex subclouds
      * For 6.0 the plan is to scale the number of subclouds to 1000, taking an incremental approach (e.g. 400, 600, 800, ...)
      * Scaling will be done with virtual nodes, e.g. on AWS. Goal is to identify bottlenecks and work incrementally
        o Could use the Intel IOTG DevCloud for some of this testing as well
      * Central controller may need 512 GB RAM and 80 cores. More is better :)
    - Platform Service Migration to Cert-Manager (completion of prep feature) [GREG]
      * Augment the current manual cert process into a more automated process
      * Use case is a user who wants certs to be rooted in their own external root CA and wants cert-manager to renew certs frequently
      * Solution is to use cert-manager with a CA-type cert-manager Issuer (see the issuer sketch at the end of these notes)
      * Spec was approved for 5.0; this is the completion of that prep work
    - K8S Root CA Update procedure, APIs/CLIs and orchestration [GREG]
      * Spec for this was written and approved in 5.0
      * The K8S root CA is used to sign many certs across STX, including the kube-apiserver endpoint
      * Greg shared an incredibly complex slide that describes the process in deep detail
    - Support for Linux auditd [GREG]
      * User is requesting we run auditd across the hosts to gather OS-level info
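
Follow-up sketch for the Container Component Upgrade item above: since Kubernetes has to be stepped one minor version at a time, going from 1.18 to 1.21 means passing through 1.19 and 1.20, which is why the VIM orchestration needs to drive incremental updates. The short Python sketch below only illustrates that version-stepping idea; the function name and version strings are made up for illustration and are not the actual orchestration code.

# Minimal sketch of incremental Kubernetes version stepping (illustrative only).
# Kubernetes upgrades move one minor version at a time, so the orchestrator
# applies each intermediate release in order rather than jumping to the target.

def kube_upgrade_path(current, target,
                      available=("1.18.1", "1.19.13", "1.20.9", "1.21.3")):
    """Return the ordered intermediate versions between current and target."""
    def minor(version):
        return tuple(int(part) for part in version.split(".")[:2])

    return [v for v in sorted(available, key=minor)
            if minor(current) < minor(v) <= minor(target)]

if __name__ == "__main__":
    # A real orchestrator would apply each step and re-run health checks
    # before moving on; here we just print the sequence 1.19 -> 1.20 -> 1.21.
    for step in kube_upgrade_path("1.18.1", "1.21.3"):
        print("upgrade to", step)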
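Follow-up sketch for the Platform Service Migration to Cert-Manager item above: the idea recorded in the notes is a CA-type cert-manager Issuer rooted in the user's external root CA, with cert-manager renewing platform certificates automatically. The Python sketch below shows roughly what that could look like through the standard Kubernetes Python client; the resource names, namespace, secret names, DNS name and durations are assumptions for illustration, not what the feature will actually use.

# Illustrative sketch only: create a CA-type cert-manager Issuer backed by an
# externally provided root CA, plus one Certificate that cert-manager will
# renew automatically. All names below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run on the controller
api = client.CustomObjectsApi()

ca_issuer = {
    "apiVersion": "cert-manager.io/v1",
    "kind": "Issuer",
    "metadata": {"name": "platform-ca-issuer", "namespace": "cert-manager"},
    # Secret assumed to hold the user's external root CA certificate and key.
    "spec": {"ca": {"secretName": "external-root-ca"}},
}

platform_cert = {
    "apiVersion": "cert-manager.io/v1",
    "kind": "Certificate",
    "metadata": {"name": "example-platform-cert", "namespace": "cert-manager"},
    "spec": {
        "secretName": "example-platform-cert-tls",
        "duration": "2160h",    # 90 days
        "renewBefore": "360h",  # cert-manager re-issues well before expiry
        "dnsNames": ["example.platform.local"],
        "issuerRef": {"name": "platform-ca-issuer", "kind": "Issuer"},
    },
}

for obj in (ca_issuer, platform_cert):
    api.create_namespaced_custom_object(
        group="cert-manager.io", version="v1",
        namespace=obj["metadata"]["namespace"],
        plural=obj["kind"].lower() + "s",  # "issuers" / "certificates"
        body=obj,
    )

The renewBefore window is what covers the "renew certs frequently" part of the use case: cert-manager re-issues the certificate from the CA Issuer before it expires, without manual steps.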