From shuicheng.lin at intel.com Sun Dec 1 07:11:31 2019
From: shuicheng.lin at intel.com (Lin, Shuicheng)
Date: Sun, 1 Dec 2019 07:11:31 +0000
Subject: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA
In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C76608ED666@SHSMSX105.ccr.corp.intel.com>

Hi Anirudh,
Per my understanding, for duplex there is always 1 active node and 1 standby node. The "active/active" or "active/standby" in the document refers to "services", not to nodes. If you run "sudo sm-dump" on the standby node, you will find that some services are "active" while others are "standby".
For the 2nd question, VMs run on compute nodes, and for duplex both controller nodes are also compute nodes. The "Active/Standby" applies to the controller function, not the compute function, which is why VMs will run on both nodes.

Best Regards
Shuicheng

From: Anirudh Gupta Sent: Thursday, November 28, 2019 11:45 AM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA

Hi Team,
Can someone please give me an update on my query?
I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there are 2 modes in which the HA controllers can be configured, i.e. either "active/active" or "active/standby":
https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html
* Can someone please suggest the steps to configure the Active-Active state?
* I have 2 kubernetes pods corresponding to each Openstack Service in the Duplex setup, and when I spawn any VM it can land on either of the two controllers. So, is this the standard implementation in StarlingX? What needs to be done for an Active-Standby configuration?
* And what difference would it have on my deployment in terms of functionality?

Regards
Anirudh Gupta

From: Anirudh Gupta Sent: 26 November 2019 10:30 To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2.0 Duplex Controller in Active/Active HA

Hi Team,
I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there are 2 modes in which the HA controllers can be configured, i.e. either "active/active" or "active/standby":
https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html
Can someone please suggest the steps to configure the Active-Active state? And what difference would it have on my deployment in terms of functionality?

Regards
Anirudh Gupta

DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus.
-------------- next part --------------
An HTML attachment was scrubbed...
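A minimal sketch of how the per-service split described above can be seen on a running duplex system, using the sm-dump command mentioned in the reply (the exact service names and output columns vary by release, and the grep filter is only an illustrative assumption):

    # on the active controller, most services report "active"
    sudo sm-dump

    # on the standby controller, a mix of "active" and "standby" services is expected
    sudo sm-dump

    # illustrative filter: show only the services currently active on this node
    sudo sm-dump | grep -i active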
URL:

From cristopher.j.lemus.contreras at intel.com Sun Dec 1 09:10:52 2019
From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com)
Date: Sun, 01 Dec 2019 03:10:52 -0600
Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images
Message-ID: <45bbe6$6jokkn@orsmga002.jf.intel.com>

List of docker images required for "platform-integ-apps":
BUILD_ID="20191201T023000Z"
rabbitmq:3.7-management
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic
quay.io/calico/node:v3.6.2
quay.io/calico/cni:v3.6.2
quay.io/calico/kube-controllers:v3.6.2
rabbitmq:3.7.13-management
rabbitmq:3.7.13
gcr.io/kubernetes-helm/tiller:v2.13.1
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
openstackhelm/mariadb:10.2.18
quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11
quay.io/stackanetes/kubernetes-entrypoint:v0.3.1
mariadb:10.2.13
memcached:1.5.5
k8s.gcr.io/pause:3.1
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
nginx:1.13.3
gcr.io/google_containers/defaultbackend:1.0

From shuicheng.lin at intel.com Mon Dec 2 02:14:50 2019
From: shuicheng.lin at intel.com (Lin, Shuicheng)
Date: Mon, 2 Dec 2019 02:14:50 +0000
Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting
In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608E9D6B@SHSMSX105.ccr.corp.intel.com>
Message-ID: <9700A18779F35F49AF027300A49E7C76608F54CC@SHSMSX105.ccr.corp.intel.com>

Hi Frank,
I tried to run busybox with Kata Containers via k8s, and it ran successfully in an IPv6 environment.

Best Regards
Shuicheng

From: Miller, Frank Sent: Saturday, November 30, 2019 4:03 AM To: Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting

Shuicheng: Thanks for the update. It looks like stx-openstack has not yet been tested with IPv6. But we have been testing IPv6 with the kubernetes platform only and simple k8s apps. Can you confirm kata containers are working with IPv6 when stx-openstack is not applied/not used?
Frank

From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, November 29, 2019 12:48 AM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting

Hi Frank,
I created the LP below for the IPv6 deployment issue I met. Could you help check whether an IPv6 deployment has been verified before, and share the BKM for it if there is one? Thanks.
https://bugs.launchpad.net/starlingx/+bug/1854316

Best Regards
Shuicheng

From: Miller, Frank > Sent: Tuesday, November 26, 2019 11:37 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting

Abbreviated minutes:
Next meeting: Tuesday Dec 10
Minutes:
1. Stx.3.0 gating LPs:
* Plan for the current 18 gating LPs:
* 4 LPs are expected to land for stx.3.0 including the 2 Highs
* 2 LPs to be marked invalid/not reproducible
* 11 LPs to be re-gated to stx.4.0
* 1 LP TBD (Erich Cordoba to update 1824881)
2.
Stx.4.0 features: In features: * 2006145: Kata container support [Shuicheng Lin] --> resourced and In for stx.4.0 * 2006537: Decouple Container Applications from Platform [Bob Church] --> resourced and In for stx.4.0 * 2006770: Backup & Restore - openstack [Ovidiu Poncea] --> resourced and In for stx.4.0 * 2005312: Containerize Openstack clients --> In for now but requires plan * TBD: Upversion Kubernetes and container platform components --> haven't create SB yet but will be required during stx.4.0 NOT In features: * 2006787: Smaller memory node support [Austin Sun] --> not committed for stx.4.0 but being worked on for stx.4.0 (ie: prep) * 2004008: Fault Containerization --> not In because it requires splitting GUI plugin into 2: one with shared panels, the other with the platform panels which is not resourced Etherpad with full minutes: https://etherpad.openstack.org/p/stx-containerization Frank -----Original Appointment----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, November 25, 2019 3:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Containerization Meeting When: Tuesday, November 26, 2019 9:30 AM-10:00 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 Please join me for the bi-weekly containers meeting. Agenda for November 26 meeting: 1. stx.3.0 gating work items: 18 gating LPs (down from 26 at our last meeting) * Status update for high priority LPs (2): * https://bugs.launchpad.net/starlingx/+bug/1838659 kubernetes apiserver certificate needs rotation [Mingyuan Qi] * https://bugs.launchpad.net/starlingx/+bug/1851287 Controller failed to lock following a failover due to elastic pod failure to shutdown [Dan Voiculeasa] * Medium priority LPs (16): * Status for the 4 LPs < 50 days old: * https://bugs.launchpad.net/starlingx/+bug/1851294 [Angie Wang] * https://bugs.launchpad.net/starlingx/+bug/1850438 [Steve Webster] * https://bugs.launchpad.net/starlingx/+bug/1850189 [Stefan Dinescu] * https://bugs.launchpad.net/starlingx/+bug/1846829 [David Sullivan] * Status update for the 12 LPs that >100 days old. [Al, Angie, Bart, Erich, JimG, Ran, Shuicheng, Tao] * Can any be closed as not reproducible or won't fix? * Which ones are being actively worked on? Which ones do the owners have a plan to fix? 2. stx.4.0 planning: * 2006145: Kata container support [Shuicheng Lin] - Request update from Shuicheng if final 2 test scenarios are done (IPv6 testing + external registry with username/pwd authentication) * 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] - Request feature approach & spec update * 2006537: Decouple Container Applications from Platform (stx.4.0 feature) [Bob Church] - Feature status update * Other potential stx.4.0 features --> which are resourced/have plans to address in stx.4.0? 
* 2006770: Backup & Restore - openstack [Ovidiu Poncea] * 2005312: Containerize Openstack clients * 2004008: Fault Containerization * TBD: Upversion Kubernetes and container platform components Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers << File: ATT00002.txt >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Mon Dec 2 04:52:34 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Mon, 2 Dec 2019 04:52:34 +0000 Subject: [Starlingx-discuss] the way to enable swift in starlingx Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628A95E@CDSMSX102.ccr.corp.intel.com> Hi system application-upload system service-parameter-modify radosgw config service_enabled=true system service-parameter-apply radosgw system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true system application-apply stx-openstack openstack endpoint list | grep object BR! -----Original Message----- From: Chen, Haochuan Z Sent: Thursday, November 28, 2019 10:12 AM To: starlingx-discuss at lists.starlingx.io Cc: Volker.Hoesslin at swsn.de; ji at sibyl.li Subject: swift enabling Hi I find there is voice to enable swift, why it was removed? Thanks! Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Message: 3 Date: Fri, 22 Nov 2019 14:14:15 +0000 From: "von Hoesslin, Volker" To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] STX 2.0: swift? Message-ID: Content-Type: text/plain; charset="iso-8859-1" hi, how can i add the object storage (swift) feature to my current STX2.0 openstack? BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: starlingx-discuss-request at lists.starlingx.io Sent: Tuesday, November 26, 2019 12:05 AM To: starlingx-discuss at lists.starlingx.io Subject: Starlingx-discuss Digest, Vol 18, Issue 150 Send Starlingx-discuss mailing list submissions to starlingx-discuss at lists.starlingx.io To subscribe or unsubscribe via the World Wide Web, visit http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss or, via email, send a message with subject or body 'help' to starlingx-discuss-request at lists.starlingx.io You can reach the person managing the list at starlingx-discuss-owner at lists.starlingx.io When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." Today's Topics: 1. 
Re: controller filesystem (Waines, Greg) 2. Enabling Object Storage (joji vlogs) 3. Re: StarlingX 2.0 Account Locked for User (Andy Ning) ---------------------------------------------------------------------- Message: 1 Date: Mon, 25 Nov 2019 13:26:31 +0000 From: "Waines, Greg" To: Saul Wold , "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Message-ID: Content-Type: text/plain; charset="utf-8" Actually a section on managing filesystems on the controllers is planned for STX 3.0 ... based on the proposed TOC for the new Operations Guide. Greg From: Saul Wold Date: Thursday, November 21, 2019 at 12:13 PM To: "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Kristal: We probably need a Story/Task added to the documentation to get the documentation of host-fs added in the right place. This helps with filesystem re-sizing. Maybe it's already there. Sau! On 11/21/19 7:12 AM, von Hoesslin, Volker wrote: incredible !!! thats the point i have search! big thx! ------------------------------------------------------------------------ *Von:* Sun, Austin [austin.sun at intel.com] *Gesendet:* Donnerstag, 21. November 2019 15:20 *An:* von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io *Betreff:* Re: [Starlingx-discuss] controller filesystem Hi Volker: From the email chain, http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005656.html You can probably the command line to change the size of docker lv Thanks. BR Austin Sun. *From:* von Hoesslin, Volker > *Sent:* Thursday, November 21, 2019 9:20 PM *To:* starlingx-discuss at lists.starlingx.io *Subject:* [Starlingx-discuss] controller filesystem hi, i trying to import some existing qcow2 images into my new installed STX 2.0. openstack image create --file /media/foobar.qcow2 --private --unprotected --disk-format qcow2 "foobar" all works fine, the image are available. now, i'm trying to create an new volume based on this images. if the images are <=6GB all works fine, but some images are very huge (10-200GB) and then it ends in an error. after some research i can see on controller the mount point /dev/mapper/cgts--vg-docker--lv 30G 11G 20G 35% /var/lib/docker increase the used storage. after fail "volume create" it goes back to given value 35%. in my oppinion, this mount point should resize to some value about 300-400GB, but how? in STX horizon backend (http://10.10.10.2:8080/admin/system_config/?tab=system_config_tab__storage_table) there is an "docker-distribution", but no docker-mount-point itself? btw, if i try to change the "docker-distribution" value to some other value (eg. 500GB via horizon backend), i got this error: *Error: *backup size of 60 is insufficient for host controller-1. Minimum backup size of 100 is required based upon glance size 20 and database size 20. Rejecting modification request. - see attachment - how can i increase the backup size to handle this error?! greez & thx, volker... 
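For reference, a minimal sketch of the host filesystem resize commands the replies above point at (hedged: whether the docker filesystem can be resized this way, and the exact syntax, depend on the StarlingX release; the 60 GiB value is only an illustrative assumption):

    # list the host filesystems (including docker) and their current sizes
    system host-fs-list controller-0

    # grow the docker filesystem on each controller; the size is in GiB
    system host-fs-modify controller-0 docker=60
    system host-fs-modify controller-1 docker=60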
_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Austin Gillmann -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 25 Nov 2019 11:05:19 -0500 From: Andy Ning To: Subject: Re: [Starlingx-discuss] StarlingX 2.0 Account Locked for User Message-ID: <8f8e6cad-0ad7-4be4-6693-a25e6cec93ad at windriver.com> Content-Type: text/plain; charset="utf-8"; format=flowed On 2019-11-25 12:00 AM, Yong Hu wrote: > Hi Anirudh, > > This issue is similar to this LP#1853017 [0], which was triggered the > account locking if the password of Openstack user "admin" was changed. > In this LP, for unknown reason, "registry-token-server" daemon kept > accessing "keystone" on host (not the instance in containers) with > obsolete token and led to "admin" account locked after 5 attempts. > If registry token server keeps on accessing keystone with obsolete token, this could be a bug in the token server. Normally if a keystone client get a failed authentication, it should try to retrieve a new token by using username/password. Andy > Right now, I am debugging this issue. > Good thing is Cengn build 11/16 seemed not to have such a problem > (LP#1853017). > > You might have a try with this version? > BTW: which cengn build were you using? > > > [0] https://bugs.launchpad.net/starlingx/+bug/1853017 > > regards, > Yong > > On 2019/11/19 6:06 PM, Anirudh Gupta wrote: >> Hi Team, >> >> I have installed StarlingX 2.0 Duplex Bare Metal. >> >> I am trying to create 2 VM’s and repeating this cycle a number of times. >> >> After using the setup for around half and hour, I am not longer able >> to access the GUI. >> >> Ever though I type correct Username/Password, it gives an error of >> Invalid Credentials. >> >> Then, I thought to use the CLI commands by following LOAD CLI section >> given in link >> >> https://docs.starlingx.io/deploy_install_guides/r2_release/openstack/ >> access.html#local-cli >> >> >> But with this also, I am facing the same error >> >> controller-0:~$ export OS_CLOUD=openstack_helm >> >> controller-0:~$ openstack endpoint list >> >> The account is locked for user: 230578cde382430a8adac399afab1230. >> (HTTP 401) (Request-ID: req-6da6d59a-2edd-4f2b-a8bf-f13f2e423a77) >> >> Earlier this automatically started working after 5-7 mins. >> >> But this time, I am completely Blocked. 
>> >> I have also raised a bug regarding the same >> >> https://bugs.launchpad.net/starlingx/+bug/1853093 >> >> Please suggest some pointers, so that I can unblock and resume my >> activities. >> >> Regards >> >> Anirudh Gupta >> >> DISCLAIMER: This electronic message and all of its contents, contains >> information which is privileged, confidential or otherwise protected >> from disclosure. The information contained in this electronic mail >> transmission is intended for use only by the individual or entity to >> which it is addressed. If you are not the intended recipient or may >> have received this electronic mail transmission in error, please >> notify the sender immediately and delete / destroy all copies of this >> electronic mail transmission without disclosing, copying, >> distributing, forwarding, printing or retaining any part of it. >> Hughes Systique accepts no responsibility for loss or damage arising >> from the use of the information transmitted by this email including >> damage from virus. >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr ------------------------------ Subject: Digest Footer _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ End of Starlingx-discuss Digest, Vol 18, Issue 150 ************************************************** From haochuan.z.chen at intel.com Mon Dec 2 07:14:57 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Mon, 2 Dec 2019 07:14:57 +0000 Subject: [Starlingx-discuss] ceph ops enabling in sysinv-conductor Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628A993@CDSMSX102.ccr.corp.intel.com> Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Mon Dec 2 09:04:13 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Mon, 02 Dec 2019 03:04:13 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: List of docker images required for "platform-integ-apps": BUILD_ID="20191202T000000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Volker.Hoesslin at swsn.de Mon Dec 2 12:59:55 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 2 Dec 2019 12:59:55 +0000 Subject: [Starlingx-discuss] the way to enable swift in starlingx In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85628A95E@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628A95E@CDSMSX102.ccr.corp.intel.com> Message-ID: hi, thx for this code-snippets but doesnt work for me :( there are no errors but also no new endpoint... controller-0:~$ source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify radosgw config service_enabled=true +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | ee200d06-d800-4dfa-83aa-b35d1fde61f6 | | service | radosgw | | section | config | | name | service_enabled | | value | true | | personality | None | | resource | None | +-------------+--------------------------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply radosgw Applying radosgw service parameters [sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true +------------+--------------------+ | Property | Value | +------------+--------------------+ | attributes | {u'enabled': True} | | name | ceph-rgw | | namespace | openstack | +------------+--------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | True | | app_version | 1.0-17-centos-stable-latest | | created_at | 2019-11-18T16:21:19.076937+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2019-11-28T16:37:31.859690+00:00 | +---------------+----------------------------------+ Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress. 
[sysadmin at controller-0 ~(keystone_admin)]$ watch system application-show stx-openstack [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | f279f864c46e469cafa16fd77d0605b0 | RegionOne | fm | faultmanagement | True | admin | http://192.168.204.2:18002 | | aad37714717540e49af212768daf9258 | RegionOne | fm | faultmanagement | True | internal | http://192.168.204.2:18002 | | 7a7a980041e94eda8e0575ec7f6472c2 | RegionOne | fm | faultmanagement | True | public | http://10.10.10.2:18002 | | d7765db9214249e2ae97958043521061 | RegionOne | patching | patching | True | admin | http://192.168.204.2:5491 | | 678a5de6ff9044afacd145fbdad234a1 | RegionOne | patching | patching | True | internal | http://192.168.204.2:5491 | | 2c6e5794267844f4af577bc71183e0f7 | RegionOne | patching | patching | True | public | http://10.10.10.2:15491 | | b21e663dd3894ebb8465b4b8418f2a47 | RegionOne | vim | nfv | True | admin | http://192.168.204.2:4545 | | 185182d4b1e740ee9e6311a55791fed7 | RegionOne | vim | nfv | True | internal | http://192.168.204.2:4545 | | 63d6b149358b475eb97d97bd3414048e | RegionOne | vim | nfv | True | public | http://10.10.10.2:4545 | | 2cc7741368cd4ca1920322f601ece48c | RegionOne | smapi | smapi | True | admin | http://192.168.204.2:7777 | | 9bd331db8f9d455ab06ef7dc4dc79660 | RegionOne | smapi | smapi | True | internal | http://192.168.204.2:7777 | | bae82a84d6e94cd198ec2c73b2dba2c0 | RegionOne | smapi | smapi | True | public | http://10.10.10.2:7777 | | 594394a663c64fe484c097fcfbf8b2db | RegionOne | keystone | identity | True | admin | http://192.168.204.2:5000/v3 | | 3804a32d209a4a929cff8321c408fe5e | RegionOne | keystone | identity | True | internal | http://192.168.204.2:5000/v3 | | 69f30354ea1c407480de2a70719f65d5 | RegionOne | keystone | identity | True | public | http://10.10.10.2:5000/v3 | | 0a5686180b104c708e741048a5ddca86 | RegionOne | barbican | key-manager | True | admin | http://192.168.204.2:9311 | | b8243821ca9443b2ab1036a35157ba73 | RegionOne | barbican | key-manager | True | internal | http://192.168.204.2:9311 | | d7284357f1374940bee22a6207152f39 | RegionOne | barbican | key-manager | True | public | http://10.10.10.2:9311 | | da935387386442b995b1812b307e228a | RegionOne | sysinv | platform | True | admin | http://192.168.204.2:6385/v1 | | d135073f7952407fbccc113e2ebfc296 | RegionOne | sysinv | platform | True | internal | http://192.168.204.2:6385/v1 | | b54be3ccb9804af0bfc579a12ea5afcc | RegionOne | sysinv | platform | True | public | http://10.10.10.2:6385/v1 | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ any suggestions? greez & thx, volker... ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. 
Dezember 2019 05:52 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: the way to enable swift in starlingx Hi system application-upload system service-parameter-modify radosgw config service_enabled=true system service-parameter-apply radosgw system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true system application-apply stx-openstack openstack endpoint list | grep object BR! -----Original Message----- From: Chen, Haochuan Z Sent: Thursday, November 28, 2019 10:12 AM To: starlingx-discuss at lists.starlingx.io Cc: Volker.Hoesslin at swsn.de; ji at sibyl.li Subject: swift enabling Hi I find there is voice to enable swift, why it was removed? Thanks! Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Message: 3 Date: Fri, 22 Nov 2019 14:14:15 +0000 From: "von Hoesslin, Volker" To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] STX 2.0: swift? Message-ID: Content-Type: text/plain; charset="iso-8859-1" hi, how can i add the object storage (swift) feature to my current STX2.0 openstack? BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: starlingx-discuss-request at lists.starlingx.io Sent: Tuesday, November 26, 2019 12:05 AM To: starlingx-discuss at lists.starlingx.io Subject: Starlingx-discuss Digest, Vol 18, Issue 150 Send Starlingx-discuss mailing list submissions to starlingx-discuss at lists.starlingx.io To subscribe or unsubscribe via the World Wide Web, visit http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss or, via email, send a message with subject or body 'help' to starlingx-discuss-request at lists.starlingx.io You can reach the person managing the list at starlingx-discuss-owner at lists.starlingx.io When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." Today's Topics: 1. Re: controller filesystem (Waines, Greg) 2. Enabling Object Storage (joji vlogs) 3. Re: StarlingX 2.0 Account Locked for User (Andy Ning) ---------------------------------------------------------------------- Message: 1 Date: Mon, 25 Nov 2019 13:26:31 +0000 From: "Waines, Greg" To: Saul Wold , "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Message-ID: Content-Type: text/plain; charset="utf-8" Actually a section on managing filesystems on the controllers is planned for STX 3.0 ... based on the proposed TOC for the new Operations Guide. Greg From: Saul Wold Date: Thursday, November 21, 2019 at 12:13 PM To: "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Kristal: We probably need a Story/Task added to the documentation to get the documentation of host-fs added in the right place. This helps with filesystem re-sizing. 
Maybe it's already there. Sau! On 11/21/19 7:12 AM, von Hoesslin, Volker wrote: incredible !!! thats the point i have search! big thx! ------------------------------------------------------------------------ *Von:* Sun, Austin [austin.sun at intel.com] *Gesendet:* Donnerstag, 21. November 2019 15:20 *An:* von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io *Betreff:* Re: [Starlingx-discuss] controller filesystem Hi Volker: From the email chain, http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005656.html You can probably the command line to change the size of docker lv Thanks. BR Austin Sun. *From:* von Hoesslin, Volker > *Sent:* Thursday, November 21, 2019 9:20 PM *To:* starlingx-discuss at lists.starlingx.io *Subject:* [Starlingx-discuss] controller filesystem hi, i trying to import some existing qcow2 images into my new installed STX 2.0. openstack image create --file /media/foobar.qcow2 --private --unprotected --disk-format qcow2 "foobar" all works fine, the image are available. now, i'm trying to create an new volume based on this images. if the images are <=6GB all works fine, but some images are very huge (10-200GB) and then it ends in an error. after some research i can see on controller the mount point /dev/mapper/cgts--vg-docker--lv 30G 11G 20G 35% /var/lib/docker increase the used storage. after fail "volume create" it goes back to given value 35%. in my oppinion, this mount point should resize to some value about 300-400GB, but how? in STX horizon backend (http://10.10.10.2:8080/admin/system_config/?tab=system_config_tab__storage_table) there is an "docker-distribution", but no docker-mount-point itself? btw, if i try to change the "docker-distribution" value to some other value (eg. 500GB via horizon backend), i got this error: *Error: *backup size of 60 is insufficient for host controller-1. Minimum backup size of 100 is required based upon glance size 20 and database size 20. Rejecting modification request. - see attachment - how can i increase the backup size to handle this error?! greez & thx, volker... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Austin Gillmann -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 3 Date: Mon, 25 Nov 2019 11:05:19 -0500 From: Andy Ning To: Subject: Re: [Starlingx-discuss] StarlingX 2.0 Account Locked for User Message-ID: <8f8e6cad-0ad7-4be4-6693-a25e6cec93ad at windriver.com> Content-Type: text/plain; charset="utf-8"; format=flowed On 2019-11-25 12:00 AM, Yong Hu wrote: > Hi Anirudh, > > This issue is similar to this LP#1853017 [0], which was triggered the > account locking if the password of Openstack user "admin" was changed. > In this LP, for unknown reason, "registry-token-server" daemon kept > accessing "keystone" on host (not the instance in containers) with > obsolete token and led to "admin" account locked after 5 attempts. > If registry token server keeps on accessing keystone with obsolete token, this could be a bug in the token server. Normally if a keystone client get a failed authentication, it should try to retrieve a new token by using username/password. Andy > Right now, I am debugging this issue. > Good thing is Cengn build 11/16 seemed not to have such a problem > (LP#1853017). > > You might have a try with this version? > BTW: which cengn build were you using? > > > [0] https://bugs.launchpad.net/starlingx/+bug/1853017 > > regards, > Yong > > On 2019/11/19 6:06 PM, Anirudh Gupta wrote: >> Hi Team, >> >> I have installed StarlingX 2.0 Duplex Bare Metal. >> >> I am trying to create 2 VM’s and repeating this cycle a number of times. >> >> After using the setup for around half and hour, I am not longer able >> to access the GUI. >> >> Ever though I type correct Username/Password, it gives an error of >> Invalid Credentials. >> >> Then, I thought to use the CLI commands by following LOAD CLI section >> given in link >> >> https://docs.starlingx.io/deploy_install_guides/r2_release/openstack/ >> access.html#local-cli >> >> >> But with this also, I am facing the same error >> >> controller-0:~$ export OS_CLOUD=openstack_helm >> >> controller-0:~$ openstack endpoint list >> >> The account is locked for user: 230578cde382430a8adac399afab1230. >> (HTTP 401) (Request-ID: req-6da6d59a-2edd-4f2b-a8bf-f13f2e423a77) >> >> Earlier this automatically started working after 5-7 mins. >> >> But this time, I am completely Blocked. >> >> I have also raised a bug regarding the same >> >> https://bugs.launchpad.net/starlingx/+bug/1853093 >> >> Please suggest some pointers, so that I can unblock and resume my >> activities. >> >> Regards >> >> Anirudh Gupta >> >> DISCLAIMER: This electronic message and all of its contents, contains >> information which is privileged, confidential or otherwise protected >> from disclosure. The information contained in this electronic mail >> transmission is intended for use only by the individual or entity to >> which it is addressed. If you are not the intended recipient or may >> have received this electronic mail transmission in error, please >> notify the sender immediately and delete / destroy all copies of this >> electronic mail transmission without disclosing, copying, >> distributing, forwarding, printing or retaining any part of it. >> Hughes Systique accepts no responsibility for loss or damage arising >> from the use of the information transmitted by this email including >> damage from virus. 
>> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr ------------------------------ Subject: Digest Footer _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ End of Starlingx-discuss Digest, Vol 18, Issue 150 ************************************************** From ildiko.vancsa at gmail.com Mon Dec 2 15:13:43 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 2 Dec 2019 10:13:43 -0500 Subject: [Starlingx-discuss] Open Infrastructure Summit and PTG - StarlingX recap Message-ID: <76BA25EB-C590-416A-AAD1-40C00F5402B2@gmail.com> Hi StarlingX Community, I wrote up a blog post about the StarlingX related activities at the Open Infrastructure Summit and PTG that was held a few weeks ago in Shanghai so those of you who couldn’t join can get a taste of it and as a reminder to those who were there. You can check the blog post here: https://www.starlingx.io/blog/starlingx-shanghai-recap.html Happy reading. :) Thanks, Ildikó From haochuan.z.chen at intel.com Mon Dec 2 15:26:29 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Mon, 2 Dec 2019 15:26:29 +0000 Subject: [Starlingx-discuss] the way to enable swift in starlingx In-Reply-To: References: <56829C2A36C2E542B0CCB9854828E4D85628A95E@CDSMSX102.ccr.corp.intel.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628AABD@CDSMSX102.ccr.corp.intel.com> You should check in such way [sysadmin at controller-0 ~(keystone_admin)]$ openstack --os-username 'admin' --os-password 'Local.123' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne endpoint list | grep object | a67918db88604bcc87504c3ec72c745c | RegionOne | swift | object-store | True | public | http://10.10.10.3:7480/swift/v1 | | d0974532671d457995ccd5c8e2c5f5eb | RegionOne | swift | object-store | True | admin | http://192.168.204.1:7480/swift/v1 | | e096598cdbec45878240aa5ac75e2047 | RegionOne | swift | object-store | True | internal | http://192.168.204.1:7480/swift/v1 | [sysadmin at controller-0 ~(keystone_admin)]$ BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: von Hoesslin, Volker Sent: Monday, December 2, 2019 9:00 PM To: Chen, Haochuan Z ; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io' ; Hu, Yong Subject: AW: the way to enable swift in starlingx hi, thx for this code-snippets but doesnt work for me :( there are no errors but also no new endpoint... 
controller-0:~$ source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify radosgw config service_enabled=true +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | ee200d06-d800-4dfa-83aa-b35d1fde61f6 | | service | radosgw | | section | config | | name | service_enabled | | value | true | | personality | None | | resource | None | +-------------+--------------------------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply radosgw Applying radosgw service parameters [sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true +------------+--------------------+ | Property | Value | +------------+--------------------+ | attributes | {u'enabled': True} | | name | ceph-rgw | | namespace | openstack | +------------+--------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | True | | app_version | 1.0-17-centos-stable-latest | | created_at | 2019-11-18T16:21:19.076937+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2019-11-28T16:37:31.859690+00:00 | +---------------+----------------------------------+ Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress. [sysadmin at controller-0 ~(keystone_admin)]$ watch system application-show stx-openstack [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | f279f864c46e469cafa16fd77d0605b0 | RegionOne | fm | faultmanagement | True | admin | http://192.168.204.2:18002 | | aad37714717540e49af212768daf9258 | RegionOne | fm | faultmanagement | True | internal | http://192.168.204.2:18002 | | 7a7a980041e94eda8e0575ec7f6472c2 | RegionOne | fm | faultmanagement | True | public | http://10.10.10.2:18002 | | d7765db9214249e2ae97958043521061 | RegionOne | patching | patching | True | admin | http://192.168.204.2:5491 | | 678a5de6ff9044afacd145fbdad234a1 | RegionOne | patching | patching | True | internal | http://192.168.204.2:5491 | | 2c6e5794267844f4af577bc71183e0f7 | RegionOne | patching | patching | True | public | http://10.10.10.2:15491 | | b21e663dd3894ebb8465b4b8418f2a47 | RegionOne | vim | nfv | True | admin | http://192.168.204.2:4545 | | 185182d4b1e740ee9e6311a55791fed7 | RegionOne | vim | nfv | True | internal | http://192.168.204.2:4545 | | 63d6b149358b475eb97d97bd3414048e | RegionOne | vim | nfv | True | public | http://10.10.10.2:4545 | | 2cc7741368cd4ca1920322f601ece48c | RegionOne | smapi | smapi | True | admin | http://192.168.204.2:7777 | | 9bd331db8f9d455ab06ef7dc4dc79660 | RegionOne | smapi | smapi | True | internal | http://192.168.204.2:7777 | | 
bae82a84d6e94cd198ec2c73b2dba2c0 | RegionOne | smapi | smapi | True | public | http://10.10.10.2:7777 | | 594394a663c64fe484c097fcfbf8b2db | RegionOne | keystone | identity | True | admin | http://192.168.204.2:5000/v3 | | 3804a32d209a4a929cff8321c408fe5e | RegionOne | keystone | identity | True | internal | http://192.168.204.2:5000/v3 | | 69f30354ea1c407480de2a70719f65d5 | RegionOne | keystone | identity | True | public | http://10.10.10.2:5000/v3 | | 0a5686180b104c708e741048a5ddca86 | RegionOne | barbican | key-manager | True | admin | http://192.168.204.2:9311 | | b8243821ca9443b2ab1036a35157ba73 | RegionOne | barbican | key-manager | True | internal | http://192.168.204.2:9311 | | d7284357f1374940bee22a6207152f39 | RegionOne | barbican | key-manager | True | public | http://10.10.10.2:9311 | | da935387386442b995b1812b307e228a | RegionOne | sysinv | platform | True | admin | http://192.168.204.2:6385/v1 | | d135073f7952407fbccc113e2ebfc296 | RegionOne | sysinv | platform | True | internal | http://192.168.204.2:6385/v1 | | b54be3ccb9804af0bfc579a12ea5afcc | RegionOne | sysinv | platform | True | public | http://10.10.10.2:6385/v1 | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ any suggestions? greez & thx, volker... ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. Dezember 2019 05:52 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: the way to enable swift in starlingx Hi system application-upload system service-parameter-modify radosgw config service_enabled=true system service-parameter-apply radosgw system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true system application-apply stx-openstack openstack endpoint list | grep object BR! -----Original Message----- From: Chen, Haochuan Z Sent: Thursday, November 28, 2019 10:12 AM To: starlingx-discuss at lists.starlingx.io Cc: Volker.Hoesslin at swsn.de; ji at sibyl.li Subject: swift enabling Hi I find there is voice to enable swift, why it was removed? Thanks! Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Message: 3 Date: Fri, 22 Nov 2019 14:14:15 +0000 From: "von Hoesslin, Volker" To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] STX 2.0: swift? Message-ID: Content-Type: text/plain; charset="iso-8859-1" hi, how can i add the object storage (swift) feature to my current STX2.0 openstack? BR! 
Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: starlingx-discuss-request at lists.starlingx.io Sent: Tuesday, November 26, 2019 12:05 AM To: starlingx-discuss at lists.starlingx.io Subject: Starlingx-discuss Digest, Vol 18, Issue 150 Send Starlingx-discuss mailing list submissions to starlingx-discuss at lists.starlingx.io To subscribe or unsubscribe via the World Wide Web, visit http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss or, via email, send a message with subject or body 'help' to starlingx-discuss-request at lists.starlingx.io You can reach the person managing the list at starlingx-discuss-owner at lists.starlingx.io When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." Today's Topics: 1. Re: controller filesystem (Waines, Greg) 2. Enabling Object Storage (joji vlogs) 3. Re: StarlingX 2.0 Account Locked for User (Andy Ning) ---------------------------------------------------------------------- Message: 1 Date: Mon, 25 Nov 2019 13:26:31 +0000 From: "Waines, Greg" To: Saul Wold , "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Message-ID: Content-Type: text/plain; charset="utf-8" Actually a section on managing filesystems on the controllers is planned for STX 3.0 ... based on the proposed TOC for the new Operations Guide. Greg From: Saul Wold Date: Thursday, November 21, 2019 at 12:13 PM To: "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Kristal: We probably need a Story/Task added to the documentation to get the documentation of host-fs added in the right place. This helps with filesystem re-sizing. Maybe it's already there. Sau! On 11/21/19 7:12 AM, von Hoesslin, Volker wrote: incredible !!! thats the point i have search! big thx! ------------------------------------------------------------------------ *Von:* Sun, Austin [austin.sun at intel.com] *Gesendet:* Donnerstag, 21. November 2019 15:20 *An:* von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io *Betreff:* Re: [Starlingx-discuss] controller filesystem Hi Volker: From the email chain, http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005656.html You can probably the command line to change the size of docker lv Thanks. BR Austin Sun. *From:* von Hoesslin, Volker > *Sent:* Thursday, November 21, 2019 9:20 PM *To:* starlingx-discuss at lists.starlingx.io *Subject:* [Starlingx-discuss] controller filesystem hi, i trying to import some existing qcow2 images into my new installed STX 2.0. openstack image create --file /media/foobar.qcow2 --private --unprotected --disk-format qcow2 "foobar" all works fine, the image are available. now, i'm trying to create an new volume based on this images. if the images are <=6GB all works fine, but some images are very huge (10-200GB) and then it ends in an error. after some research i can see on controller the mount point /dev/mapper/cgts--vg-docker--lv 30G 11G 20G 35% /var/lib/docker increase the used storage. after fail "volume create" it goes back to given value 35%. in my oppinion, this mount point should resize to some value about 300-400GB, but how? in STX horizon backend (http://10.10.10.2:8080/admin/system_config/?tab=system_config_tab__storage_table) there is an "docker-distribution", but no docker-mount-point itself? 
btw, if i try to change the "docker-distribution" value to some other value (eg. 500GB via horizon backend), i got this error: *Error: *backup size of 60 is insufficient for host controller-1. Minimum backup size of 100 is required based upon glance size 20 and database size 20. Rejecting modification request. - see attachment - how can i increase the backup size to handle this error?! greez & thx, volker... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Austin Gillmann -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 25 Nov 2019 11:05:19 -0500 From: Andy Ning To: Subject: Re: [Starlingx-discuss] StarlingX 2.0 Account Locked for User Message-ID: <8f8e6cad-0ad7-4be4-6693-a25e6cec93ad at windriver.com> Content-Type: text/plain; charset="utf-8"; format=flowed On 2019-11-25 12:00 AM, Yong Hu wrote: > Hi Anirudh, > > This issue is similar to this LP#1853017 [0], which was triggered the > account locking if the password of Openstack user "admin" was changed. > In this LP, for unknown reason, "registry-token-server" daemon kept > accessing "keystone" on host (not the instance in containers) with > obsolete token and led to "admin" account locked after 5 attempts. > If registry token server keeps on accessing keystone with obsolete token, this could be a bug in the token server. Normally if a keystone client get a failed authentication, it should try to retrieve a new token by using username/password. Andy > Right now, I am debugging this issue. > Good thing is Cengn build 11/16 seemed not to have such a problem > (LP#1853017). > > You might have a try with this version? > BTW: which cengn build were you using? > > > [0] https://bugs.launchpad.net/starlingx/+bug/1853017 > > regards, > Yong > > On 2019/11/19 6:06 PM, Anirudh Gupta wrote: >> Hi Team, >> >> I have installed StarlingX 2.0 Duplex Bare Metal. >> >> I am trying to create 2 VM's and repeating this cycle a number of times. >> >> After using the setup for around half and hour, I am not longer able >> to access the GUI. >> >> Ever though I type correct Username/Password, it gives an error of >> Invalid Credentials. 
>> >> Then, I thought to use the CLI commands by following LOAD CLI section >> given in link >> >> https://docs.starlingx.io/deploy_install_guides/r2_release/openstack/ >> access.html#local-cli >> >> >> But with this also, I am facing the same error >> >> controller-0:~$ export OS_CLOUD=openstack_helm >> >> controller-0:~$ openstack endpoint list >> >> The account is locked for user: 230578cde382430a8adac399afab1230. >> (HTTP 401) (Request-ID: req-6da6d59a-2edd-4f2b-a8bf-f13f2e423a77) >> >> Earlier this automatically started working after 5-7 mins. >> >> But this time, I am completely Blocked. >> >> I have also raised a bug regarding the same >> >> https://bugs.launchpad.net/starlingx/+bug/1853093 >> >> Please suggest some pointers, so that I can unblock and resume my >> activities. >> >> Regards >> >> Anirudh Gupta >> >> DISCLAIMER: This electronic message and all of its contents, contains >> information which is privileged, confidential or otherwise protected >> from disclosure. The information contained in this electronic mail >> transmission is intended for use only by the individual or entity to >> which it is addressed. If you are not the intended recipient or may >> have received this electronic mail transmission in error, please >> notify the sender immediately and delete / destroy all copies of this >> electronic mail transmission without disclosing, copying, >> distributing, forwarding, printing or retaining any part of it. >> Hughes Systique accepts no responsibility for loss or damage arising >> from the use of the information transmitted by this email including >> damage from virus. >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr ------------------------------ Subject: Digest Footer _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ End of Starlingx-discuss Digest, Vol 18, Issue 150 ************************************************** From scott.little at windriver.com Mon Dec 2 15:27:03 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 2 Dec 2019 10:27:03 -0500 Subject: [Starlingx-discuss] r/stx.3.0 In-Reply-To: References: <31b882e3-9c06-69b8-919a-c1bd7cbfe313@windriver.com> Message-ID: We have a successful build of r/stx.3.0 on CENGN.  The branch is now open for submission of your bug fixes. Builds can be found under ... http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/ I have one outstanding detail, setting the SW_VERSION to 19.12.  Expect reviews shortly. Scott On 2019-11-28 9:05 a.m., Scott Little wrote: > It would help if I included a link to all the review wouldn't it. > > https://review.opendev.org/#/q/topic:create-r/stx.3.0+(status:open+OR+status:merged) > > > Scott > > > On 2019-11-28 8:55 a.m., Scott Little wrote: >> All git reviews for creation of branch r/stx.3.0 have been posted. 
>> >> Scott >> >> >> On 2019-11-27 1:41 p.m., Scott Little wrote: >>> The StarlingX 3.0 release branch (r/stx.3.0) will be created >>> tomorrow (Nov 28) based on tonights 20191128T023000Z CENGN build >>> context (assuming a successful build). >>> >>> Code review primes can expect to see a small code inspection that >>> will modify .gitreview to reflect the new branch an all StarlingX >>> repos.  There will also be a review for the new repo manifest. >>> Please process these reviews quickly. >>> >>> For the master branch, no code freeze is required.  Continue to work >>> as normal.  However I would request that feature content intended >>> for 4.0 be held back a day or two until we have a successful CENGN >>> build of 3.0. >>> >>> Please treat the r/stx.3.0 branch as frozen until I have a >>> successful CENGN build on the new branch.  No gerrit reviews should >>> be approved except for my manifest and .gitreview changes.   I'll >>> send a follow up e-mail to lift the freeze. >>> >>> Fixes for 3.0 gating issues should continue to be delivered to the >>> master branch, and subsequently cherry-picked into the r/stx.3.0 >>> branch. >>> >>> Thanks for your cooperation. >>> >>> Scott Little >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Mon Dec 2 15:29:32 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 2 Dec 2019 10:29:32 -0500 Subject: [Starlingx-discuss] Open Infrastructure Summit and PTG Shanghai - edge recap Message-ID: <97525A01-EB74-4381-BF57-F01E9ACBAA5A@gmail.com> Hi, Similarly to the StarlingX recap you can find here a summary on further edge activities driven by the OSF Edge Computing Group. Please see the content here: http://lists.openstack.org/pipermail/edge-computing/2019-December/000664.html Please let me know if you have any questions or comments. Thanks and Best Regards, Ildikó From erich.cordoba.malibran at intel.com Mon Dec 2 17:13:26 2019 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Mon, 2 Dec 2019 17:13:26 +0000 Subject: [Starlingx-discuss] Problems downloading mirror in centos8 environment In-Reply-To: <000501d5a59d$9e154420$da3fcc60$@neusoft.com> References: <000501d5a59d$9e154420$da3fcc60$@neusoft.com> Message-ID: <2a1960e9b161a32e1f16b3dd6da624d25cefe8b4.camel@intel.com> Hi, I'm not familiar with the centos upgrade procedure, but as I understand you are running dnf from a CentOS 8 system right? In that case, then it seems to be a problem with the repository cache or probably a .repo file in /etc/yum.repos.d/ is missing. Check if you have the correct .repo file pointing to vault.centos.org. Is this the only package with problems for you? -Erich On Thu, 2019-11-28 at 11:40 +0800, 付勇 wrote: > Hi StarlingX team > I’m upgrading starlingx to centos8 and have a question to ask for > help. > > problem: > I cannot download the specified version of srpm through the dnf > download --source command. 
> > Eg: > About two weeks ago, I was able to download the bash-4.4.19- > 7.el8.src.rpm file through the “dnf download --source bash-4.4.19- > 7.el8” command in centos8. > But now I cannot download the bash-4.4.19-7.el8.src.rpm file with > this command, even if I can download it at > http://vault.centos.org/8.0.1905/BaseOS/Source/SPackages/ . > > Error info: > No package bash-4.4.19-7.el8.src available. > Exiting due to strict setting. > Error: No package bash-4.4.19-7.el8.src available. > > please contact me, If you have any good suggestions. > Thank you > Best Regards > Wish you happy everyday! > -------------------------------- > Yong.Fu- Neusoft > > ------------------------------------------------------------------- > -------------------------------- > Confidentiality Notice: The information contained in this e-mail and > any accompanying attachment(s) > is intended only for the use of the intended recipient and may be > confidential and/or privileged of > Neusoft Corporation, its subsidiaries and/or its affiliates. If any > reader of this communication is > not the intended recipient, unauthorized use, forwarding, printing, > storing, disclosure or copying > is strictly prohibited, and may be unlawful.If you have received this > communication in error,please > immediately notify the sender by return e-mail, and delete the > original message and all copies from > your system. Thank you. > ------------------------------------------------------------------- > -------------------------------- > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Mon Dec 2 17:50:07 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Mon, 2 Dec 2019 17:50:07 +0000 Subject: [Starlingx-discuss] Community Call (Nov 27, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007B362B5@ALA-MBD.corp.ad.wrs.com> Notes from last week's Community Call. This week's call will be hosted by Bruce. Thanks, Bill... - Standing Topics - Gerrit Reviews in Need of Attention - nothing this week - Sanity: any RED since last week? - no - Unanswered Requests for Help on Mailing List - STX 2.0: swift? - Volker von Hoesslin - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007035.html - per Brent, the API is supported as an optional service, backed by Ceph - Yong will get some input from Brent & will revert to Volker (& include Austin re: storage, as appropriate) - DPDK with VPP - Ezpeer Chen - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007063.html - Brent will revert to Ezpeer - Enabling Object Storage - Austin Gillmann - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007053.html - similar to first Q - Yong/Brent/Austin to collaborate - Reduce OpenStack VM Size? - Sai Chandu - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007031.html - Sai Chandu to share some before & after sizes - This Week's Topics - stx.3.0 - No release meeting on 11/28 due to American Thanksgiving and meeting conflicts for WR team in Canada - Reminder: stx.3.0 release branch will be created on 11/28 - Once created, all stx.3.0 gating bugs needs to be merged in master and then cherry-picked by the developer to the release branch - Test teams to schedule sanity and move all stx.3.0 test activities to the stx.3.0 builds. 
- stx.3.0 Gating Bugs - https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0 - The current count is 61 -- down from 80 last week - Good progress! Let's continue this momentum. - distro.openstack: 14 (was 15) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.distro.openstack - security: 5 (was 14 - first place!) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.security - Jim Somerville & Robin Lu made great progress on CVEs. Actively working on 3 more related to kernel CVEs and 1 related to the OVMF package. - config: 13 (was 11) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.config - containers: 16 (was 24 - nice!) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.containers - networking: 5 (was 5) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.networking - Huifeng - moved some to 4.0, actively working on the 5 remaining - storage: 6 (was 5) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.storage - Austin - actively working, all are medium - distcloud: 2 (no change) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.distcloud - metal: 3 (was 2) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.metal - fault: 1 (no change) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.fault - update: 1 (no change) https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.update - There are 23 bugs that are older than 100 days - These are medium priority bugs that were moved from stx.2.0 to stx.3.0 - Request each sub-project team to review to determine if these are still valid - Some requests sent to the reporters (if part of the test team) for re-test - Ghada: No point in moving these again to stx.4.0 if not addressed by stx.3.0 - Community agrees that moving this set to stx.4.0 is not a good idea. - The current recommendation is to keep them open, but mark them not-gating / low priority. - Regression Test Update: (Requested from Elio) - https://docs.google.com/spreadsheets/d/1X085xI96M6PIeum87w6IEF11G-TG-W1OeGuAJWnrAWI/edit#gid=1717644237 - Making good progress. On track to finish regression first pass execution by 11/29. - stx.4.0 - Reminder about planning email - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007029.html - Release Planning Spreadsheet (see the Release Candidates worksheet) - https://docs.google.com/spreadsheets/d/1a93wt0XO0_JvajnPzQwnqFkXfdDysKVnHpbrEc17_yg/edit#gid=0 - Sai chandu / Calsoft -- The deployment of Openstack in StarlingX all-in-one Simplex is done successfully. 
It is containing StarlingX with k8s, EdgeX and Openstack - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007064.html - Setup Document: https://drive.google.com/open?id=110kbsRoBFZQ3J99hobu0QGxC_xchYwKk - Image: https://drive.google.com/open?id=14jxBWnB5ydCk2sf_jc7I2B_e0aTC-rc6 - Proposed rules regarding patch files re: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007070.html - +1 from Saul, this will be discussed in Build meeting - Unit Test - per http://lists.starlingx.io/pipermail/starlingx-discuss/2019-October/006710.html - each sub-project to consider a similar path as what was followed by Config - what level/type of automation testing would provide most benefit to the sub-project - applicability, address current lack of coverage, other considerations - what area of the sub-project's code is most in need of additional coverage - heat map of issues, rate of change, desire to 'open' to non-experts - where should the sub-project invest to increase/maintain quality while enabling contributions - what work is required - example testcases, documentation, test infrastructure to lower the bar for writing effective unit tests? - Doc Process re: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/006985.html - Dev Process: what's the process for modifying this process? re: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007020.html - didn't get to these 2 topics -----Original Message----- From: Zvonar, Bill Sent: Wednesday, November 27, 2019 7:40 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (Nov 27, 2019) Hi all, reminder of the Community call later today. We have many topics on the agenda [0], so we'll likely have to prune it a bit... - standing topics - Gerrit review in need of attention - none so far - Sanity: any RED since last week? - not so far - Unanswered Requests for Help on Mailing List - 4 this week - this week's topics - stx.3.0 update, including gating bugs - PSA re: deployment of OpenStack in StarlingX AIO Simplex - Doc Process - Unit Test - Proposed rules regarding patch files - Dev Process - what's the process for modifying the process Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20191127T1500 From scott.little at windriver.com Mon Dec 2 18:39:02 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 2 Dec 2019 13:39:02 -0500 Subject: [Starlingx-discuss] r/stx.3.0 In-Reply-To: References: <31b882e3-9c06-69b8-919a-c1bd7cbfe313@windriver.com> Message-ID: Please review the following set of changes to set the 3.0 branch SW_VERSION to 19.12.  (i.e. expected release date December 2019) https://review.opendev.org/#/q/status:open+branch:r/stx.3.0+topic:stx_3.0_sw_version Thanks, Scott On 2019-12-02 10:27 a.m., Scott Little wrote: > We have a successful build of r/stx.3.0 on CENGN.  The branch is now > open for submission of your bug fixes. > > Builds can be found under ... > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/ > > I have one outstanding detail, setting the SW_VERSION to 19.12. Expect > reviews shortly. > > Scott > > > > On 2019-11-28 9:05 a.m., Scott Little wrote: >> It would help if I included a link to all the review wouldn't it. 
>> >> https://review.opendev.org/#/q/topic:create-r/stx.3.0+(status:open+OR+status:merged) >> >> >> Scott >> >> >> On 2019-11-28 8:55 a.m., Scott Little wrote: >>> All git reviews for creation of branch r/stx.3.0 have been posted. >>> >>> Scott >>> >>> >>> On 2019-11-27 1:41 p.m., Scott Little wrote: >>>> The StarlingX 3.0 release branch (r/stx.3.0) will be created >>>> tomorrow (Nov 28) based on tonights 20191128T023000Z CENGN build >>>> context (assuming a successful build). >>>> >>>> Code review primes can expect to see a small code inspection that >>>> will modify .gitreview to reflect the new branch an all StarlingX >>>> repos.  There will also be a review for the new repo manifest. >>>> Please process these reviews quickly. >>>> >>>> For the master branch, no code freeze is required.  Continue to >>>> work as normal.  However I would request that feature content >>>> intended for 4.0 be held back a day or two until we have a >>>> successful CENGN build of 3.0. >>>> >>>> Please treat the r/stx.3.0 branch as frozen until I have a >>>> successful CENGN build on the new branch.  No gerrit reviews should >>>> be approved except for my manifest and .gitreview changes.   I'll >>>> send a follow up e-mail to lift the freeze. >>>> >>>> Fixes for 3.0 gating issues should continue to be delivered to the >>>> master branch, and subsequently cherry-picked into the r/stx.3.0 >>>> branch. >>>> >>>> Thanks for your cooperation. >>>> >>>> Scott Little >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Mon Dec 2 19:17:31 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 2 Dec 2019 19:17:31 +0000 Subject: [Starlingx-discuss] StarlingX 2.0 Release Upgrade Functionality In-Reply-To: References: <586E8B730EA0DA4A9D6A80A10E486BC007B2BA68@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0C15FD011@ALA-MBD.corp.ad.wrs.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C16006AE@ALA-MBD.corp.ad.wrs.com> Hi Anirudh, As I mentioned below, the first step is to engage with the distro.openstack sub-project team to discuss the next steps. I can't comment on the plan as the work still needs to be investigated and scoped out. Thanks, Ghada From: Anirudh Gupta [mailto:Anirudh.Gupta at hsc.com] Sent: Wednesday, November 27, 2019 5:18 AM To: Khalil, Ghada; Zvonar, Bill; starlingx-discuss at lists.starlingx.io Cc: Hu, Yong Subject: RE: StarlingX 2.0 Release Upgrade Functionality Hi Ghada, I can discuss with my team members for contributing towards StarlingX development. Can you please respond below to my queries : * What would be the plan of execution? * What would be the scope of work for us? 
Regards Anirudh Gupta From: Khalil, Ghada > Sent: 26 November 2019 04:35 To: Zvonar, Bill >; Anirudh Gupta >; starlingx-discuss at lists.starlingx.io Cc: Hu, Yong > Subject: RE: StarlingX 2.0 Release Upgrade Functionality Hi Anirudh, We are in the planning phase for stx.4.0 and are looking for developers from the community to look at implementing openstack upgrades with minimal down time. If this is something your team is interested in taking on, I invite you to participate in the distro.openstack meeting to discuss next steps. Thanks, Ghada From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Wednesday, November 20, 2019 8:57 AM To: Anirudh Gupta; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Release Upgrade Functionality Hi Anirudh, don't know if you got an answer directly, but this is not supported. Bill... From: Anirudh Gupta > Sent: Monday, November 11, 2019 3:47 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX 2.0 Release Upgrade Functionality Hi Team, I have installed StarlingX 2.0 on Bare metal. It is based on Openstack Stein Release. Is there any option in StarlingX to upgrade the existing Stein Release to latest Train Release with zero down time? Please suggest the steps to do so? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Mon Dec 2 22:14:44 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 2 Dec 2019 22:14:44 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20191202 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-2 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] List of docker images : http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007138.html regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Church at windriver.com Tue Dec 3 00:24:49 2019 From: Robert.Church at windriver.com (Church, Robert) Date: Tue, 3 Dec 2019 00:24:49 +0000 Subject: [Starlingx-discuss] the way to enable swift in starlingx Message-ID: <38ABE432-BFAE-48B9-B300-FD07078B5F06@windriver.com> Sorry for the slow response. I was on vacation last week. The following are from my testing notes when enabling for STX 2.0. This has Also been verified with the latest master build for STX 3.0 NOTES: - Check for any additional platform alarms. I have seen instances when the application apply (after enabling the ceph-rgw chart) will timeout due to high platform CPU usage. A second reapply will be successful. - You can disable the ceph-rgw helm chart and re-apply the application to remove helm release but you will need to manually remove the containerized keystone endpoint as the helm release removal doesn't clean them up. 
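A rough, untested sketch of that manual endpoint cleanup, assuming the same containerized-keystone URL and admin credentials used in the transcript below; the endpoint IDs will differ on every system, so list them first and delete each one by ID:

# point the openstack client at the containerized keystone
export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
# list the stale swift endpoints left behind after disabling the ceph-rgw chart
openstack endpoint list --service object-store
# delete each leftover endpoint by its ID
openstack endpoint delete <endpoint-id>
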
Bob ---------------- For swift to be enabled two services need to be enabled: - The radosgw platform service (to enable the Ceph Rados Gateway for object storage) - The ceph-rgw helm chart to enable the swift endpoints in containerized openstack # Rados Gateway will be disabled by default on a freshly installed system $ source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-list --service radosgw +--------------------------------------+---------+---------+-----------------+-------+-------------+----------+ | uuid | service | section | name | value | personality | resource | +--------------------------------------+---------+---------+-----------------+-------+-------------+----------+ | 8357f029-6ad6-4a1e-a2f3-5b969dcb7541 | radosgw | config | fs_size_mb | 25 | None | None | | 9eca56c3-b6b7-4cc5-8981-495611e8839a | radosgw | config | service_enabled | false | None | None | +--------------------------------------+---------+---------+-----------------+-------+-------------+----------+ # And the openstack application will not be installed [sysadmin at controller-0 ~(keystone_admin)]$ system application-list +---------------------+---------+-------------------------------+---------------+---------+-----------+ | application | version | manifest name | manifest file | status | progress | +---------------------+---------+-------------------------------+---------------+---------+-----------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed | +---------------------+---------+-------------------------------+---------------+---------+-----------+ # Upload the stx-openstack application [sysadmin at controller-0 ~(keystone_admin)]$ system application-upload stx-openstack-1.0-17.tgz +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | False | | app_version | 1.0-17 | | created_at | 2019-07-31T07:41:20.852112+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | uploading | | updated_at | None | +---------------+----------------------------------+ # Verify that the application was successfully uploaded [sysadmin at controller-0 ~(keystone_admin)]$ system application-list +---------------------+---------+-------------------------------+--------------------+----------+-----------+ | application | version | manifest name | manifest file | status | progress | +---------------------+---------+-------------------------------+--------------------+----------+-----------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed | | stx-openstack | 1.0-17 | armada-manifest | stx-openstack.yaml | uploaded | completed | +---------------------+---------+-------------------------------+--------------------+----------+-----------+ # The ceph-rgw chart is disabled by default as part of the stx-openstack upload [sysadmin at controller-0 ~(keystone_admin)]$ system helm-override-list stx-openstack --long | grep -e enabled -e ceph-rgw -e+ +---------------------+--------------------------------+---------------+ | chart name | overrides namespaces | chart enabled | +---------------------+--------------------------------+---------------+ | ceph-rgw | [u'openstack'] | [False] | +---------------------+--------------------------------+---------------+ # Apply the application [sysadmin at controller-0 ~(keystone_admin)]$ system 
application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | False | | app_version | 1.0-17 | | created_at | 2019-07-31T07:41:20.852112+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2019-07-31T07:42:01.460523+00:00 | +---------------+----------------------------------+ # Verify that the application is applied [sysadmin at controller-0 ~(keystone_admin)]$ system application-list +---------------------+---------+-------------------------------+--------------------+---------+-----------+ | application | version | manifest name | manifest file | status | progress | +---------------------+---------+-------------------------------+--------------------+---------+-----------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed | | stx-openstack | 1.0-17 | armada-manifest | stx-openstack.yaml | applied | completed | +---------------------+---------+-------------------------------+--------------------+---------+-----------+ # Enable the radosgw on the platform. # NOTE: that can be done at any time, with or without the stx-openstack uploaded and/or applied # Check that there is no config out-of-date alarms [sysadmin at controller-0 ~(keystone_admin)]$ fm alarm-list | grep 250.001 [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify radosgw config service_enabled=true +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | 9eca56c3-b6b7-4cc5-8981-495611e8839a | | service | radosgw | | section | config | | name | service_enabled | | value | true | | personality | None | | resource | None | +-------------+--------------------------------------+ # Observe that there is a config out-of-date alarm [sysadmin at controller-0 ~(keystone_admin)]$ fm alarm-list | grep 250.001 | 250.001 | controller-0 Configuration is out-of-date. | host=controller-0 | major | 2019-07-31T07:43:01. 
| # Apply the change [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply radosgw Applying radosgw service parameters # monitor for when the alarm clears [sysadmin at controller-0 ~(keystone_admin)]$ watch -n 5 "fm alarm-list | grep 250.001" # Enable the ceph-rgw chart [sysadmin at controller-0 ~(keystone_admin)]$ system helm-override-list stx-openstack --long | grep -e enabled -e ceph-rgw -e+ +---------------------+--------------------------------+---------------+ | chart name | overrides namespaces | chart enabled | +---------------------+--------------------------------+---------------+ | ceph-rgw | [u'openstack'] | [False] | +---------------------+--------------------------------+---------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true +------------+--------------------+ | Property | Value | +------------+--------------------+ | attributes | {u'enabled': True} | | name | ceph-rgw | | namespace | openstack | +------------+--------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system helm-override-list stx-openstack --long | grep -e enabled -e ceph-rgw -e+ +---------------------+--------------------------------+---------------+ | chart name | overrides namespaces | chart enabled | +---------------------+--------------------------------+---------------+ | ceph-rgw | [u'openstack'] | [True] | +---------------------+--------------------------------+---------------+ # reapply the app [sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | True | | app_version | 1.0-17 | | created_at | 2019-07-31T07:41:20.852112+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2019-07-31T08:31:50.558441+00:00 | +---------------+----------------------------------+ # Monitor for when the application has been re-applied [sysadmin at controller-0 ~(keystone_admin)]$ watch -n 5 system application-list +---------------------+---------+-------------------------------+--------------------+---------+-----------+ | application | version | manifest name | manifest file | status | progress | +---------------------+---------+-------------------------------+--------------------+---------+-----------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed | | stx-openstack | 1.0-17 | armada-manifest | stx-openstack.yaml | applied | completed | +---------------------+---------+-------------------------------+--------------------+---------+-----------+ # Observe swift is enable for the containerized services [sysadmin at controller-0 ~(keystone_admin)]$ export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3 [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object | 09a1414e413847e5b46807034f584628 | RegionOne | swift | object-store | True | admin | http://192.168.204.2:7480/swift/v1 | | 57a7d77823704a89a352d14ec8563cde | RegionOne | swift | object-store | True | public | http://10.10.10.2:7480/swift/v1 | | 57f65799d37f442fb3c816dee8b84f71 | RegionOne | swift | object-store | True | internal | http://192.168.204.2:7480/swift/v1 | # But not the platform [sysadmin at controller-0 ~(keystone_admin)]$ source /etc/platform/openrc [sysadmin at 
controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object On 12/2/19, 9:28 AM, "Chen, Haochuan Z" wrote: You should check in such way [sysadmin at controller-0 ~(keystone_admin)]$ openstack --os-username 'admin' --os-password 'Local.123' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne endpoint list | grep object | a67918db88604bcc87504c3ec72c745c | RegionOne | swift | object-store | True | public | http://10.10.10.3:7480/swift/v1 | | d0974532671d457995ccd5c8e2c5f5eb | RegionOne | swift | object-store | True | admin | http://192.168.204.1:7480/swift/v1 | | e096598cdbec45878240aa5ac75e2047 | RegionOne | swift | object-store | True | internal | http://192.168.204.1:7480/swift/v1 | [sysadmin at controller-0 ~(keystone_admin)]$ BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: von Hoesslin, Volker Sent: Monday, December 2, 2019 9:00 PM To: Chen, Haochuan Z ; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io' ; Hu, Yong Subject: AW: the way to enable swift in starlingx hi, thx for this code-snippets but doesnt work for me :( there are no errors but also no new endpoint... controller-0:~$ source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify radosgw config service_enabled=true +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | ee200d06-d800-4dfa-83aa-b35d1fde61f6 | | service | radosgw | | section | config | | name | service_enabled | | value | true | | personality | None | | resource | None | +-------------+--------------------------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply radosgw Applying radosgw service parameters [sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true +------------+--------------------+ | Property | Value | +------------+--------------------+ | attributes | {u'enabled': True} | | name | ceph-rgw | | namespace | openstack | +------------+--------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | True | | app_version | 1.0-17-centos-stable-latest | | created_at | 2019-11-18T16:21:19.076937+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2019-11-28T16:37:31.859690+00:00 | +---------------+----------------------------------+ Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress. 
[sysadmin at controller-0 ~(keystone_admin)]$ watch system application-show stx-openstack [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | f279f864c46e469cafa16fd77d0605b0 | RegionOne | fm | faultmanagement | True | admin | http://192.168.204.2:18002 | | aad37714717540e49af212768daf9258 | RegionOne | fm | faultmanagement | True | internal | http://192.168.204.2:18002 | | 7a7a980041e94eda8e0575ec7f6472c2 | RegionOne | fm | faultmanagement | True | public | http://10.10.10.2:18002 | | d7765db9214249e2ae97958043521061 | RegionOne | patching | patching | True | admin | http://192.168.204.2:5491 | | 678a5de6ff9044afacd145fbdad234a1 | RegionOne | patching | patching | True | internal | http://192.168.204.2:5491 | | 2c6e5794267844f4af577bc71183e0f7 | RegionOne | patching | patching | True | public | http://10.10.10.2:15491 | | b21e663dd3894ebb8465b4b8418f2a47 | RegionOne | vim | nfv | True | admin | http://192.168.204.2:4545 | | 185182d4b1e740ee9e6311a55791fed7 | RegionOne | vim | nfv | True | internal | http://192.168.204.2:4545 | | 63d6b149358b475eb97d97bd3414048e | RegionOne | vim | nfv | True | public | http://10.10.10.2:4545 | | 2cc7741368cd4ca1920322f601ece48c | RegionOne | smapi | smapi | True | admin | http://192.168.204.2:7777 | | 9bd331db8f9d455ab06ef7dc4dc79660 | RegionOne | smapi | smapi | True | internal | http://192.168.204.2:7777 | | bae82a84d6e94cd198ec2c73b2dba2c0 | RegionOne | smapi | smapi | True | public | http://10.10.10.2:7777 | | 594394a663c64fe484c097fcfbf8b2db | RegionOne | keystone | identity | True | admin | http://192.168.204.2:5000/v3 | | 3804a32d209a4a929cff8321c408fe5e | RegionOne | keystone | identity | True | internal | http://192.168.204.2:5000/v3 | | 69f30354ea1c407480de2a70719f65d5 | RegionOne | keystone | identity | True | public | http://10.10.10.2:5000/v3 | | 0a5686180b104c708e741048a5ddca86 | RegionOne | barbican | key-manager | True | admin | http://192.168.204.2:9311 | | b8243821ca9443b2ab1036a35157ba73 | RegionOne | barbican | key-manager | True | internal | http://192.168.204.2:9311 | | d7284357f1374940bee22a6207152f39 | RegionOne | barbican | key-manager | True | public | http://10.10.10.2:9311 | | da935387386442b995b1812b307e228a | RegionOne | sysinv | platform | True | admin | http://192.168.204.2:6385/v1 | | d135073f7952407fbccc113e2ebfc296 | RegionOne | sysinv | platform | True | internal | http://192.168.204.2:6385/v1 | | b54be3ccb9804af0bfc579a12ea5afcc | RegionOne | sysinv | platform | True | public | http://10.10.10.2:6385/v1 | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ any suggestions? greez & thx, volker... ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. 
Dezember 2019 05:52 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: the way to enable swift in starlingx Hi system application-upload system service-parameter-modify radosgw config service_enabled=true system service-parameter-apply radosgw system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true system application-apply stx-openstack openstack endpoint list | grep object BR! -----Original Message----- From: Chen, Haochuan Z Sent: Thursday, November 28, 2019 10:12 AM To: starlingx-discuss at lists.starlingx.io Cc: Volker.Hoesslin at swsn.de; ji at sibyl.li Subject: swift enabling Hi I find there is voice to enable swift, why it was removed? Thanks! Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Message: 3 Date: Fri, 22 Nov 2019 14:14:15 +0000 From: "von Hoesslin, Volker" To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] STX 2.0: swift? Message-ID: Content-Type: text/plain; charset="iso-8859-1" hi, how can i add the object storage (swift) feature to my current STX2.0 openstack? BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: starlingx-discuss-request at lists.starlingx.io Sent: Tuesday, November 26, 2019 12:05 AM To: starlingx-discuss at lists.starlingx.io Subject: Starlingx-discuss Digest, Vol 18, Issue 150 Send Starlingx-discuss mailing list submissions to starlingx-discuss at lists.starlingx.io To subscribe or unsubscribe via the World Wide Web, visit http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss or, via email, send a message with subject or body 'help' to starlingx-discuss-request at lists.starlingx.io You can reach the person managing the list at starlingx-discuss-owner at lists.starlingx.io When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." Today's Topics: 1. Re: controller filesystem (Waines, Greg) 2. Enabling Object Storage (joji vlogs) 3. Re: StarlingX 2.0 Account Locked for User (Andy Ning) ---------------------------------------------------------------------- Message: 1 Date: Mon, 25 Nov 2019 13:26:31 +0000 From: "Waines, Greg" To: Saul Wold , "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Message-ID: Content-Type: text/plain; charset="utf-8" Actually a section on managing filesystems on the controllers is planned for STX 3.0 ... based on the proposed TOC for the new Operations Guide. Greg From: Saul Wold Date: Thursday, November 21, 2019 at 12:13 PM To: "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Kristal: We probably need a Story/Task added to the documentation to get the documentation of host-fs added in the right place. This helps with filesystem re-sizing. 
Maybe it's already there. Sau! On 11/21/19 7:12 AM, von Hoesslin, Volker wrote: incredible !!! thats the point i have search! big thx! ------------------------------------------------------------------------ *Von:* Sun, Austin [austin.sun at intel.com] *Gesendet:* Donnerstag, 21. November 2019 15:20 *An:* von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io *Betreff:* Re: [Starlingx-discuss] controller filesystem Hi Volker: From the email chain, http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005656.html You can probably the command line to change the size of docker lv Thanks. BR Austin Sun. *From:* von Hoesslin, Volker > *Sent:* Thursday, November 21, 2019 9:20 PM *To:* starlingx-discuss at lists.starlingx.io *Subject:* [Starlingx-discuss] controller filesystem hi, i trying to import some existing qcow2 images into my new installed STX 2.0. openstack image create --file /media/foobar.qcow2 --private --unprotected --disk-format qcow2 "foobar" all works fine, the image are available. now, i'm trying to create an new volume based on this images. if the images are <=6GB all works fine, but some images are very huge (10-200GB) and then it ends in an error. after some research i can see on controller the mount point /dev/mapper/cgts--vg-docker--lv 30G 11G 20G 35% /var/lib/docker increase the used storage. after fail "volume create" it goes back to given value 35%. in my oppinion, this mount point should resize to some value about 300-400GB, but how? in STX horizon backend (http://10.10.10.2:8080/admin/system_config/?tab=system_config_tab__storage_table) there is an "docker-distribution", but no docker-mount-point itself? btw, if i try to change the "docker-distribution" value to some other value (eg. 500GB via horizon backend), i got this error: *Error: *backup size of 60 is insufficient for host controller-1. Minimum backup size of 100 is required based upon glance size 20 and database size 20. Rejecting modification request. - see attachment - how can i increase the backup size to handle this error?! greez & thx, volker... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Austin Gillmann -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 3 Date: Mon, 25 Nov 2019 11:05:19 -0500 From: Andy Ning To: Subject: Re: [Starlingx-discuss] StarlingX 2.0 Account Locked for User Message-ID: <8f8e6cad-0ad7-4be4-6693-a25e6cec93ad at windriver.com> Content-Type: text/plain; charset="utf-8"; format=flowed On 2019-11-25 12:00 AM, Yong Hu wrote: > Hi Anirudh, > > This issue is similar to this LP#1853017 [0], which was triggered the > account locking if the password of Openstack user "admin" was changed. > In this LP, for unknown reason, "registry-token-server" daemon kept > accessing "keystone" on host (not the instance in containers) with > obsolete token and led to "admin" account locked after 5 attempts. > If registry token server keeps on accessing keystone with obsolete token, this could be a bug in the token server. Normally if a keystone client get a failed authentication, it should try to retrieve a new token by using username/password. Andy > Right now, I am debugging this issue. > Good thing is Cengn build 11/16 seemed not to have such a problem > (LP#1853017). > > You might have a try with this version? > BTW: which cengn build were you using? > > > [0] https://bugs.launchpad.net/starlingx/+bug/1853017 > > regards, > Yong > > On 2019/11/19 6:06 PM, Anirudh Gupta wrote: >> Hi Team, >> >> I have installed StarlingX 2.0 Duplex Bare Metal. >> >> I am trying to create 2 VM's and repeating this cycle a number of times. >> >> After using the setup for around half and hour, I am not longer able >> to access the GUI. >> >> Ever though I type correct Username/Password, it gives an error of >> Invalid Credentials. >> >> Then, I thought to use the CLI commands by following LOAD CLI section >> given in link >> >> https://docs.starlingx.io/deploy_install_guides/r2_release/openstack/ >> access.html#local-cli >> >> >> But with this also, I am facing the same error >> >> controller-0:~$ export OS_CLOUD=openstack_helm >> >> controller-0:~$ openstack endpoint list >> >> The account is locked for user: 230578cde382430a8adac399afab1230. >> (HTTP 401) (Request-ID: req-6da6d59a-2edd-4f2b-a8bf-f13f2e423a77) >> >> Earlier this automatically started working after 5-7 mins. >> >> But this time, I am completely Blocked. >> >> I have also raised a bug regarding the same >> >> https://bugs.launchpad.net/starlingx/+bug/1853093 >> >> Please suggest some pointers, so that I can unblock and resume my >> activities. >> >> Regards >> >> Anirudh Gupta >> >> DISCLAIMER: This electronic message and all of its contents, contains >> information which is privileged, confidential or otherwise protected >> from disclosure. The information contained in this electronic mail >> transmission is intended for use only by the individual or entity to >> which it is addressed. If you are not the intended recipient or may >> have received this electronic mail transmission in error, please >> notify the sender immediately and delete / destroy all copies of this >> electronic mail transmission without disclosing, copying, >> distributing, forwarding, printing or retaining any part of it. >> Hughes Systique accepts no responsibility for loss or damage arising >> from the use of the information transmitted by this email including >> damage from virus. 
>> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr ------------------------------ Subject: Digest Footer _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ End of Starlingx-discuss Digest, Vol 18, Issue 150 ************************************************** _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From shuicheng.lin at intel.com Tue Dec 3 00:27:02 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 3 Dec 2019 00:27:02 +0000 Subject: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608ED666@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608F5759@SHSMSX105.ccr.corp.intel.com> Hi Anirudh, What is the purpose or benefit you want to have 2 active controller? I think it is reasonable there is 1 active controller only, and another controller is standby for backup, this is the meaning of HA. Two active controller at the same time will cause brain split. What do you mean "Openstack Services"? Openstack is containerized now, and run in both nodes by K8S. Best Regards Shuicheng From: Anirudh Gupta Sent: Monday, December 2, 2019 6:59 PM To: Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Shuicheng, All the services on the Active Controller are "enabled-active" and I can see some services "enabled-standby" on the standby controller. Is there any way we can set all the Openstack Services "enabled-active" on both the Active and Standby Node? Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 01 December 2019 12:42 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Anirudh, Per my understanding, for duplex, there is always 1 active node and 1 standby node. The "active/active" or "active/standby" in the document is for "services", not for node. If you try to run "sudo sm-dump" in the standby node, you will find some services are "active", while some services are "standby". For the 2nd question, VMs are running in compute node. And for duplex, both controller nodes are compute nodes also. And the "Active/Standby" is for controller, not for compute function, that is why VMs will run in both node. Best Regards Shuicheng From: Anirudh Gupta > Sent: Thursday, November 28, 2019 11:45 AM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, Can someone please give me any update on my Query: I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. 
either "active/active" or "active/standby" https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html * Can someone please suggest the steps to configure Active-Active state? * I have 2 kubernetes pods corresponding to each Openstack Service in Duplex Setup and when I spawn any VM, it goes on any of the two controllers. So, is this the standard implementation in StarlingX? What needs to be done in Active-Standby configuration? * And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta From: Anirudh Gupta Sent: 26 November 2019 10:30 To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either "active/active" or "active/standby" https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html Can someone please suggest the steps to configure Active-Active state? And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Tue Dec 3 00:54:55 2019 From: yong.hu at intel.com (Yong Hu) Date: Tue, 3 Dec 2019 08:54:55 +0800 Subject: [Starlingx-discuss] the way to enable swift in starlingx In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85628AABD@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628A95E@CDSMSX102.ccr.corp.intel.com> <56829C2A36C2E542B0CCB9854828E4D85628AABD@CDSMSX102.ccr.corp.intel.com> Message-ID: <704921ef-e3a2-5eae-2018-15b6f0a2f3d8@intel.com> Hi Volker, The major diff here was "--os-auth-url http://keystone.openstack.svc.cluster.local/v3", which meant the containerized OpenStack Keystone (for OpenStack services). 
While, when you directly run cmd "openstack endpoint list" in the context with env parameters (applied by "/etc/platform/openrc"), you were using platform keystone (--os-auth-url http://:5000/v3). So, what you saw were endpoints on host. regards, Yong On 2019/12/2 11:26 PM, Chen, Haochuan Z wrote: > You should check in such way > > [sysadmin at controller-0 ~(keystone_admin)]$ openstack --os-username 'admin' --os-password 'Local.123' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne endpoint list | grep object > | a67918db88604bcc87504c3ec72c745c | RegionOne | swift | object-store | True | public | http://10.10.10.3:7480/swift/v1 | > | d0974532671d457995ccd5c8e2c5f5eb | RegionOne | swift | object-store | True | admin | http://192.168.204.1:7480/swift/v1 | > | e096598cdbec45878240aa5ac75e2047 | RegionOne | swift | object-store | True | internal | http://192.168.204.1:7480/swift/v1 | > [sysadmin at controller-0 ~(keystone_admin)]$ > > BR! > > Martin, Chen > SSP, Software Engineer > 021-61164330 > > -----Original Message----- > From: von Hoesslin, Volker > Sent: Monday, December 2, 2019 9:00 PM > To: Chen, Haochuan Z ; 'ji at sibyl.li' > Cc: 'starlingx-discuss at lists.starlingx.io' ; Hu, Yong > Subject: AW: the way to enable swift in starlingx > > hi, > thx for this code-snippets but doesnt work for me :( there are no errors but also no new endpoint... > > controller-0:~$ source /etc/platform/openrc > [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify radosgw config service_enabled=true > +-------------+--------------------------------------+ > | Property | Value | > +-------------+--------------------------------------+ > | uuid | ee200d06-d800-4dfa-83aa-b35d1fde61f6 | > | service | radosgw | > | section | config | > | name | service_enabled | > | value | true | > | personality | None | > | resource | None | > +-------------+--------------------------------------+ > [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply radosgw Applying radosgw service parameters > [sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true > +------------+--------------------+ > | Property | Value | > +------------+--------------------+ > | attributes | {u'enabled': True} | > | name | ceph-rgw | > | namespace | openstack | > +------------+--------------------+ > [sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack > +---------------+----------------------------------+ > | Property | Value | > +---------------+----------------------------------+ > | active | True | > | app_version | 1.0-17-centos-stable-latest | > | created_at | 2019-11-18T16:21:19.076937+00:00 | > | manifest_file | stx-openstack.yaml | > | manifest_name | armada-manifest | > | name | stx-openstack | > | progress | None | > | status | applying | > | updated_at | 2019-11-28T16:37:31.859690+00:00 | > +---------------+----------------------------------+ > Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress. 
> [sysadmin at controller-0 ~(keystone_admin)]$ watch system application-show stx-openstack > [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object > [sysadmin at controller-0 ~(keystone_admin)]$ > [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list > +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ > | ID | Region | Service Name | Service Type | Enabled | Interface | URL | > +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ > | f279f864c46e469cafa16fd77d0605b0 | RegionOne | fm | faultmanagement | True | admin | http://192.168.204.2:18002 | > | aad37714717540e49af212768daf9258 | RegionOne | fm | faultmanagement | True | internal | http://192.168.204.2:18002 | > | 7a7a980041e94eda8e0575ec7f6472c2 | RegionOne | fm | faultmanagement | True | public | http://10.10.10.2:18002 | > | d7765db9214249e2ae97958043521061 | RegionOne | patching | patching | True | admin | http://192.168.204.2:5491 | > | 678a5de6ff9044afacd145fbdad234a1 | RegionOne | patching | patching | True | internal | http://192.168.204.2:5491 | > | 2c6e5794267844f4af577bc71183e0f7 | RegionOne | patching | patching | True | public | http://10.10.10.2:15491 | > | b21e663dd3894ebb8465b4b8418f2a47 | RegionOne | vim | nfv | True | admin | http://192.168.204.2:4545 | > | 185182d4b1e740ee9e6311a55791fed7 | RegionOne | vim | nfv | True | internal | http://192.168.204.2:4545 | > | 63d6b149358b475eb97d97bd3414048e | RegionOne | vim | nfv | True | public | http://10.10.10.2:4545 | > | 2cc7741368cd4ca1920322f601ece48c | RegionOne | smapi | smapi | True | admin | http://192.168.204.2:7777 | > | 9bd331db8f9d455ab06ef7dc4dc79660 | RegionOne | smapi | smapi | True | internal | http://192.168.204.2:7777 | > | bae82a84d6e94cd198ec2c73b2dba2c0 | RegionOne | smapi | smapi | True | public | http://10.10.10.2:7777 | > | 594394a663c64fe484c097fcfbf8b2db | RegionOne | keystone | identity | True | admin | http://192.168.204.2:5000/v3 | > | 3804a32d209a4a929cff8321c408fe5e | RegionOne | keystone | identity | True | internal | http://192.168.204.2:5000/v3 | > | 69f30354ea1c407480de2a70719f65d5 | RegionOne | keystone | identity | True | public | http://10.10.10.2:5000/v3 | > | 0a5686180b104c708e741048a5ddca86 | RegionOne | barbican | key-manager | True | admin | http://192.168.204.2:9311 | > | b8243821ca9443b2ab1036a35157ba73 | RegionOne | barbican | key-manager | True | internal | http://192.168.204.2:9311 | > | d7284357f1374940bee22a6207152f39 | RegionOne | barbican | key-manager | True | public | http://10.10.10.2:9311 | > | da935387386442b995b1812b307e228a | RegionOne | sysinv | platform | True | admin | http://192.168.204.2:6385/v1 | > | d135073f7952407fbccc113e2ebfc296 | RegionOne | sysinv | platform | True | internal | http://192.168.204.2:6385/v1 | > | b54be3ccb9804af0bfc579a12ea5afcc | RegionOne | sysinv | platform | True | public | http://10.10.10.2:6385/v1 | > +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ > > any suggestions? > > greez & thx, volker... > > ________________________________________ > Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] > Gesendet: Montag, 2. 
Dezember 2019 05:52 > An: von Hoesslin, Volker; 'ji at sibyl.li' > Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong > Betreff: the way to enable swift in starlingx > > Hi > > system application-upload > > system service-parameter-modify radosgw config service_enabled=true > > system service-parameter-apply radosgw > > system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true > > system application-apply stx-openstack > > openstack endpoint list | grep object > > > BR! > > -----Original Message----- > From: Chen, Haochuan Z > Sent: Thursday, November 28, 2019 10:12 AM > To: starlingx-discuss at lists.starlingx.io > Cc: Volker.Hoesslin at swsn.de; ji at sibyl.li > Subject: swift enabling > > Hi > > I find there is voice to enable swift, why it was removed? > > Thanks! > > > Message: 2 > Date: Mon, 25 Nov 2019 07:53:31 -0600 > From: joji vlogs > To: "starlingx-discuss at lists.starlingx.io" > > Subject: [Starlingx-discuss] Enabling Object Storage > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi all, > > I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. > > My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. > > Thank you and have a great day! > > > > Message: 3 > Date: Fri, 22 Nov 2019 14:14:15 +0000 > From: "von Hoesslin, Volker" > To: "starlingx-discuss at lists.starlingx.io" > > Subject: [Starlingx-discuss] STX 2.0: swift? > Message-ID: > Content-Type: text/plain; charset="iso-8859-1" > > hi, > how can i add the object storage (swift) feature to my current STX2.0 openstack? > > > BR! > > Martin, Chen > SSP, Software Engineer > 021-61164330 > > -----Original Message----- > From: starlingx-discuss-request at lists.starlingx.io > Sent: Tuesday, November 26, 2019 12:05 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Starlingx-discuss Digest, Vol 18, Issue 150 > > Send Starlingx-discuss mailing list submissions to > starlingx-discuss at lists.starlingx.io > > To subscribe or unsubscribe via the World Wide Web, visit > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > or, via email, send a message with subject or body 'help' to > starlingx-discuss-request at lists.starlingx.io > > You can reach the person managing the list at > starlingx-discuss-owner at lists.starlingx.io > > When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." > > > Today's Topics: > > 1. Re: controller filesystem (Waines, Greg) > 2. Enabling Object Storage (joji vlogs) > 3. Re: StarlingX 2.0 Account Locked for User (Andy Ning) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 25 Nov 2019 13:26:31 +0000 > From: "Waines, Greg" > To: Saul Wold , "von Hoesslin, Volker" > , "Sun, Austin" , > "starlingx-discuss at lists.starlingx.io" > , "Dale, Kristal" > > Subject: Re: [Starlingx-discuss] controller filesystem > Message-ID: > Content-Type: text/plain; charset="utf-8" > > Actually a section on managing filesystems on the controllers is planned for STX 3.0 ... based on the proposed TOC for the new Operations Guide. 
> Greg > > From: Saul Wold > Date: Thursday, November 21, 2019 at 12:13 PM > To: "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" > Subject: Re: [Starlingx-discuss] controller filesystem > > Kristal: > > We probably need a Story/Task added to the documentation to get the documentation of host-fs added in the right place. This helps with filesystem re-sizing. > > Maybe it's already there. > > Sau! > > On 11/21/19 7:12 AM, von Hoesslin, Volker wrote: > incredible !!! thats the point i have search! big thx! > ------------------------------------------------------------------------ > *Von:* Sun, Austin [austin.sun at intel.com] > *Gesendet:* Donnerstag, 21. November 2019 15:20 > *An:* von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io > *Betreff:* Re: [Starlingx-discuss] controller filesystem Hi Volker: > From the email chain, > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005656.html > You can probably the command line to change the size of docker lv Thanks. > BR > Austin Sun. > *From:* von Hoesslin, Volker > > *Sent:* Thursday, November 21, 2019 9:20 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] controller filesystem hi, i trying to import some existing qcow2 images into my new installed STX 2.0. > openstack image create --file /media/foobar.qcow2 --private --unprotected --disk-format qcow2 "foobar" > all works fine, the image are available. > now, i'm trying to create an new volume based on this images. if the images are <=6GB all works fine, but some images are very huge > (10-200GB) and then it ends in an error. after some research i can see on controller the mount point > /dev/mapper/cgts--vg-docker--lv 30G 11G 20G 35% /var/lib/docker > increase the used storage. after fail "volume create" it goes back to given value 35%. in my oppinion, this mount point should resize to some value about 300-400GB, but how? in STX horizon backend > (http://10.10.10.2:8080/admin/system_config/?tab=system_config_tab__storage_table) > there is an "docker-distribution", but no docker-mount-point itself? > btw, if i try to change the "docker-distribution" value to some other value (eg. 500GB via horizon backend), i got this error: > *Error: *backup size of 60 is insufficient for host controller-1. > Minimum backup size of 100 is required based upon glance size 20 and database size 20. Rejecting modification request. > - see attachment - > how can i increase the backup size to handle this error?! > greez & thx, volker... > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Mon, 25 Nov 2019 07:53:31 -0600 > From: joji vlogs > To: "starlingx-discuss at lists.starlingx.io" > > Subject: [Starlingx-discuss] Enabling Object Storage > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi all, > > I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. > > My question is how would I go about enabling object storage? 
I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. > > Thank you and have a great day! > Austin Gillmann > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 3 > Date: Mon, 25 Nov 2019 11:05:19 -0500 > From: Andy Ning > To: > Subject: Re: [Starlingx-discuss] StarlingX 2.0 Account > Locked for User > Message-ID: <8f8e6cad-0ad7-4be4-6693-a25e6cec93ad at windriver.com> > Content-Type: text/plain; charset="utf-8"; format=flowed > > > > On 2019-11-25 12:00 AM, Yong Hu wrote: >> Hi Anirudh, >> >> This issue is similar to this LP#1853017 [0], which was triggered the >> account locking if the password of Openstack user "admin" was changed. >> In this LP, for unknown reason, "registry-token-server" daemon kept >> accessing "keystone" on host (not the instance in containers) with >> obsolete token and led to "admin" account locked after 5 attempts. >> > If registry token server keeps on accessing keystone with obsolete token, this could be a bug in the token server. Normally if a keystone client get a failed authentication, it should try to retrieve a new token by using username/password. > > Andy > >> Right now, I am debugging this issue. >> Good thing is Cengn build 11/16 seemed not to have such a problem >> (LP#1853017). >> >> You might have a try with this version? >> BTW: which cengn build were you using? >> >> >> [0] https://bugs.launchpad.net/starlingx/+bug/1853017 >> >> regards, >> Yong >> >> On 2019/11/19 6:06 PM, Anirudh Gupta wrote: >>> Hi Team, >>> >>> I have installed StarlingX 2.0 Duplex Bare Metal. >>> >>> I am trying to create 2 VM's and repeating this cycle a number of times. >>> >>> After using the setup for around half and hour, I am not longer able >>> to access the GUI. >>> >>> Ever though I type correct Username/Password, it gives an error of >>> Invalid Credentials. >>> >>> Then, I thought to use the CLI commands by following LOAD CLI section >>> given in link >>> >>> https://docs.starlingx.io/deploy_install_guides/r2_release/openstack/ >>> access.html#local-cli >>> >>> >>> But with this also, I am facing the same error >>> >>> controller-0:~$ export OS_CLOUD=openstack_helm >>> >>> controller-0:~$ openstack endpoint list >>> >>> The account is locked for user: 230578cde382430a8adac399afab1230. >>> (HTTP 401) (Request-ID: req-6da6d59a-2edd-4f2b-a8bf-f13f2e423a77) >>> >>> Earlier this automatically started working after 5-7 mins. >>> >>> But this time, I am completely Blocked. >>> >>> I have also raised a bug regarding the same >>> >>> https://bugs.launchpad.net/starlingx/+bug/1853093 >>> >>> Please suggest some pointers, so that I can unblock and resume my >>> activities. >>> >>> Regards >>> >>> Anirudh Gupta >>> >>> DISCLAIMER: This electronic message and all of its contents, contains >>> information which is privileged, confidential or otherwise protected >>> from disclosure. The information contained in this electronic mail >>> transmission is intended for use only by the individual or entity to >>> which it is addressed. If you are not the intended recipient or may >>> have received this electronic mail transmission in error, please >>> notify the sender immediately and delete / destroy all copies of this >>> electronic mail transmission without disclosing, copying, >>> distributing, forwarding, printing or retaining any part of it. 
>>> Hughes Systique accepts no responsibility for loss or damage arising >>> from the use of the information transmitted by this email including >>> damage from virus. >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > -- > Andy Ning > Cube: 3071 > Tel: 613-9631408 (int: 4408) > Skype: andy.ning.wr > > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > ------------------------------ > > End of Starlingx-discuss Digest, Vol 18, Issue 150 > ************************************************** >
From Robert.Church at windriver.com Tue Dec 3 02:34:09 2019 From: Robert.Church at windriver.com (Church, Robert) Date: Tue, 3 Dec 2019 02:34:09 +0000 Subject: [Starlingx-discuss] ceph ops enabling in sysinv-conductor In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85628A993@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628A993@CDSMSX102.ccr.corp.intel.com> Message-ID:
Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently no longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py remain from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs:
  cluster:
    id:     6231df84-33be-4aa4-82ea-7408e0f2421c
    health: HEALTH_WARN
            too few PGs per OSD (21 < min 30)
  services:
    mon: 3 daemons, quorum controller-0,controller-1,storage-0
    mgr: controller-0(active), standbys: controller-1
    osd: 6 osds: 6 up, 6 in
  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   645 MiB used, 5.4 TiB / 5.4 TiB avail
    pgs:     64 active+clean
Since every installation will install platform-integ-apps, I think we should do the following:
1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number of OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning.
2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values.
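As a rough illustration of proposal 1, the usual Ceph guideline targets on the order of 100 placement groups per OSD, divided by the pool replication factor and rounded to a power of two. A minimal sketch of that calculation in shell, with example values only (the real change would live in sysinv/helm/rbd_provisioner.py and would take the provisioned OSD count from the system inventory):

  OSDS=6                                  # example: the 6-OSD cluster above
  REPLICAS=2                              # example: assumed pool replication factor
  TARGET=$(( OSDS * 100 / REPLICAS ))     # target total PGs for the pool (~300 here)
  PG_NUM=1
  while [ $(( PG_NUM * 2 )) -le "$TARGET" ]; do PG_NUM=$(( PG_NUM * 2 )); done
  echo "pg_num=$PG_NUM"                   # prints 256 for this example

With 256 PGs and 2 replicas over 6 OSDs that works out to roughly 85 PGs per OSD, which clears the "too few PGs per OSD (21 < min 30)" warning shown above. For proposal 2, the new value would then be supplied as a helm user override (for example via system helm-override-update) and picked up by re-applying platform-integ-apps; the exact override key depends on how the chart ends up exposing the chunk size.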
Regards, Bob From: "Chen, Haochuan Z" Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church Cc: "'starlingx-discuss at lists.starlingx.io'" , Ovidiu Poncea Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Tue Dec 3 02:46:30 2019 From: yong.hu at intel.com (Yong Hu) Date: Tue, 3 Dec 2019 10:46:30 +0800 Subject: [Starlingx-discuss] [stx.distro.openstack] WW49 project meeting - https://zoom.us/j/342730236 Message-ID: Hi folks, Today we are going to have the project meeting (10:00 PM China time, and 6:00 AM US PST), your participation will be appreciated. Here is the agenda for WW49: 1. stx.3.0: moving toward to RC, branch "r/stx.3.0" was made. patches go to master first and cherry pick to r/stx.3.0 per needs. 2. stx.3.0 BUG review [1]: we are having 3 HIGH for stx.3.0 and 20+ medium. please *update your progress* in LP before the meeting. 3. stx.4.0 planning - Openstack containerized services upgrade from Train to "U" - Openstack clients upgrade - Openstack services on host (keystone, horizon, and barbican) [0] distro.openstack etherpad: https://etherpad.openstack.org/p/stx-distro-openstack-meetings [1] distro.openstack Launchpad: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.openstack [2] zoom bridge for meeting: https://zoom.us/j/342730236 regards, Yong From austin.sun at intel.com Tue Dec 3 05:30:18 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 3 Dec 2019 05:30:18 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 12/04/2019 Message-ID: Hi All: Agenda for 12/04 meeting: * stx.4.0 feature - Ceph containerization update (Tingjie/martin) - Standardize Flock Package Versioning (Yang Bin) - Kata Container (Shuicheng) - CentOS 8.0 upgrade planning (Shuai Zhao) * stx 3.0 bugs fix - CVE issue tracking (Shuicheng) * OVMF * kernel change upgrade 1062 - Storage issue tracking (Tingjie) 4 medium open https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.storage LP#1826886 cinder cmd not working intermittently ----Ma zheng LP#1844164 alarm 800.001 raised on lock storage-0 and not cleared when storage-0 unlocks --- Martin LP#1847336 IPv6 Distributed Cloud: ansible-playbook 'Wipe ceph osds' does not support re-play / re-entrance ---- Ovidiu LP#1848198 Glance backend present on non-openstack deployment ---- Stefan Dinescu - Others issue tracking (Austin) 1 medium open https://bugs.launchpad.net/starlingx/+bug/1847335 * Opens (All) Update the agenda if other topic to be discussed : https://etherpad.openstack.org/p/stx-distro-other Thanks. BR Austin Sun. 
From Volker.Hoesslin at swsn.de Tue Dec 3 08:06:08 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Tue, 3 Dec 2019 08:06:08 +0000 Subject: [Starlingx-discuss] the way to enable swift in starlingx In-Reply-To: <2at5p001dm3gsa8l@shdsegapp1> References: <56829C2A36C2E542B0CCB9854828E4D85628A95E@CDSMSX102.ccr.corp.intel.com> , <2at5p001dm3gsa8l@shdsegapp1> Message-ID: sure! sry, my fault, again :( controller-0:~$ openstack endpoint list | grep object | 5a7d4436ebff4c4fb4a871b2ed6699dd | RegionOne | swift | object-store | True | admin | http://192.168.204.2:7480/swift/v1 | | 6bab53dff7374761bcac4da9c45eeccc | RegionOne | swift | object-store | True | public | http://10.10.10.2:7480/swift/v1 | | c0f3b03756aa42e8827a95fca55b1331 | RegionOne | swift | object-store | True | internal | http://192.168.204.2:7480/swift/v1 | by the way, is there any way or best practise to publish the swift endpoint. the given endpoints are not the right one for me. admin and internal are the realy internal networks, so no one should access this network. the public endpoint is inside my OAM and also should no public traffic pass this network. greez & thx, volker... ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. Dezember 2019 16:26 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: Re: [Starlingx-discuss] the way to enable swift in starlingx You should check in such way [sysadmin at controller-0 ~(keystone_admin)]$ openstack --os-username 'admin' --os-password 'Local.123' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne endpoint list | grep object | a67918db88604bcc87504c3ec72c745c | RegionOne | swift | object-store | True | public | http://10.10.10.3:7480/swift/v1 | | d0974532671d457995ccd5c8e2c5f5eb | RegionOne | swift | object-store | True | admin | http://192.168.204.1:7480/swift/v1 | | e096598cdbec45878240aa5ac75e2047 | RegionOne | swift | object-store | True | internal | http://192.168.204.1:7480/swift/v1 | [sysadmin at controller-0 ~(keystone_admin)]$ BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: von Hoesslin, Volker Sent: Monday, December 2, 2019 9:00 PM To: Chen, Haochuan Z ; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io' ; Hu, Yong Subject: AW: the way to enable swift in starlingx hi, thx for this code-snippets but doesnt work for me :( there are no errors but also no new endpoint... 
controller-0:~$ source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify radosgw config service_enabled=true +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | ee200d06-d800-4dfa-83aa-b35d1fde61f6 | | service | radosgw | | section | config | | name | service_enabled | | value | true | | personality | None | | resource | None | +-------------+--------------------------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply radosgw Applying radosgw service parameters [sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true +------------+--------------------+ | Property | Value | +------------+--------------------+ | attributes | {u'enabled': True} | | name | ceph-rgw | | namespace | openstack | +------------+--------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | True | | app_version | 1.0-17-centos-stable-latest | | created_at | 2019-11-18T16:21:19.076937+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2019-11-28T16:37:31.859690+00:00 | +---------------+----------------------------------+ Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress. [sysadmin at controller-0 ~(keystone_admin)]$ watch system application-show stx-openstack [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | f279f864c46e469cafa16fd77d0605b0 | RegionOne | fm | faultmanagement | True | admin | http://192.168.204.2:18002 | | aad37714717540e49af212768daf9258 | RegionOne | fm | faultmanagement | True | internal | http://192.168.204.2:18002 | | 7a7a980041e94eda8e0575ec7f6472c2 | RegionOne | fm | faultmanagement | True | public | http://10.10.10.2:18002 | | d7765db9214249e2ae97958043521061 | RegionOne | patching | patching | True | admin | http://192.168.204.2:5491 | | 678a5de6ff9044afacd145fbdad234a1 | RegionOne | patching | patching | True | internal | http://192.168.204.2:5491 | | 2c6e5794267844f4af577bc71183e0f7 | RegionOne | patching | patching | True | public | http://10.10.10.2:15491 | | b21e663dd3894ebb8465b4b8418f2a47 | RegionOne | vim | nfv | True | admin | http://192.168.204.2:4545 | | 185182d4b1e740ee9e6311a55791fed7 | RegionOne | vim | nfv | True | internal | http://192.168.204.2:4545 | | 63d6b149358b475eb97d97bd3414048e | RegionOne | vim | nfv | True | public | http://10.10.10.2:4545 | | 2cc7741368cd4ca1920322f601ece48c | RegionOne | smapi | smapi | True | admin | http://192.168.204.2:7777 | | 9bd331db8f9d455ab06ef7dc4dc79660 | RegionOne | smapi | smapi | True | internal | http://192.168.204.2:7777 | | 
bae82a84d6e94cd198ec2c73b2dba2c0 | RegionOne | smapi | smapi | True | public | http://10.10.10.2:7777 | | 594394a663c64fe484c097fcfbf8b2db | RegionOne | keystone | identity | True | admin | http://192.168.204.2:5000/v3 | | 3804a32d209a4a929cff8321c408fe5e | RegionOne | keystone | identity | True | internal | http://192.168.204.2:5000/v3 | | 69f30354ea1c407480de2a70719f65d5 | RegionOne | keystone | identity | True | public | http://10.10.10.2:5000/v3 | | 0a5686180b104c708e741048a5ddca86 | RegionOne | barbican | key-manager | True | admin | http://192.168.204.2:9311 | | b8243821ca9443b2ab1036a35157ba73 | RegionOne | barbican | key-manager | True | internal | http://192.168.204.2:9311 | | d7284357f1374940bee22a6207152f39 | RegionOne | barbican | key-manager | True | public | http://10.10.10.2:9311 | | da935387386442b995b1812b307e228a | RegionOne | sysinv | platform | True | admin | http://192.168.204.2:6385/v1 | | d135073f7952407fbccc113e2ebfc296 | RegionOne | sysinv | platform | True | internal | http://192.168.204.2:6385/v1 | | b54be3ccb9804af0bfc579a12ea5afcc | RegionOne | sysinv | platform | True | public | http://10.10.10.2:6385/v1 | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ any suggestions? greez & thx, volker... ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. Dezember 2019 05:52 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: the way to enable swift in starlingx Hi system application-upload system service-parameter-modify radosgw config service_enabled=true system service-parameter-apply radosgw system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true system application-apply stx-openstack openstack endpoint list | grep object BR! -----Original Message----- From: Chen, Haochuan Z Sent: Thursday, November 28, 2019 10:12 AM To: starlingx-discuss at lists.starlingx.io Cc: Volker.Hoesslin at swsn.de; ji at sibyl.li Subject: swift enabling Hi I find there is voice to enable swift, why it was removed? Thanks! Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Message: 3 Date: Fri, 22 Nov 2019 14:14:15 +0000 From: "von Hoesslin, Volker" To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] STX 2.0: swift? Message-ID: Content-Type: text/plain; charset="iso-8859-1" hi, how can i add the object storage (swift) feature to my current STX2.0 openstack? BR! 
Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: starlingx-discuss-request at lists.starlingx.io Sent: Tuesday, November 26, 2019 12:05 AM To: starlingx-discuss at lists.starlingx.io Subject: Starlingx-discuss Digest, Vol 18, Issue 150 Send Starlingx-discuss mailing list submissions to starlingx-discuss at lists.starlingx.io To subscribe or unsubscribe via the World Wide Web, visit http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss or, via email, send a message with subject or body 'help' to starlingx-discuss-request at lists.starlingx.io You can reach the person managing the list at starlingx-discuss-owner at lists.starlingx.io When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." Today's Topics: 1. Re: controller filesystem (Waines, Greg) 2. Enabling Object Storage (joji vlogs) 3. Re: StarlingX 2.0 Account Locked for User (Andy Ning) ---------------------------------------------------------------------- Message: 1 Date: Mon, 25 Nov 2019 13:26:31 +0000 From: "Waines, Greg" To: Saul Wold , "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Message-ID: Content-Type: text/plain; charset="utf-8" Actually a section on managing filesystems on the controllers is planned for STX 3.0 ... based on the proposed TOC for the new Operations Guide. Greg From: Saul Wold Date: Thursday, November 21, 2019 at 12:13 PM To: "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Kristal: We probably need a Story/Task added to the documentation to get the documentation of host-fs added in the right place. This helps with filesystem re-sizing. Maybe it's already there. Sau! On 11/21/19 7:12 AM, von Hoesslin, Volker wrote: incredible !!! thats the point i have search! big thx! ------------------------------------------------------------------------ *Von:* Sun, Austin [austin.sun at intel.com] *Gesendet:* Donnerstag, 21. November 2019 15:20 *An:* von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io *Betreff:* Re: [Starlingx-discuss] controller filesystem Hi Volker: From the email chain, http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005656.html You can probably the command line to change the size of docker lv Thanks. BR Austin Sun. *From:* von Hoesslin, Volker > *Sent:* Thursday, November 21, 2019 9:20 PM *To:* starlingx-discuss at lists.starlingx.io *Subject:* [Starlingx-discuss] controller filesystem hi, i trying to import some existing qcow2 images into my new installed STX 2.0. openstack image create --file /media/foobar.qcow2 --private --unprotected --disk-format qcow2 "foobar" all works fine, the image are available. now, i'm trying to create an new volume based on this images. if the images are <=6GB all works fine, but some images are very huge (10-200GB) and then it ends in an error. after some research i can see on controller the mount point /dev/mapper/cgts--vg-docker--lv 30G 11G 20G 35% /var/lib/docker increase the used storage. after fail "volume create" it goes back to given value 35%. in my oppinion, this mount point should resize to some value about 300-400GB, but how? in STX horizon backend (http://10.10.10.2:8080/admin/system_config/?tab=system_config_tab__storage_table) there is an "docker-distribution", but no docker-mount-point itself? 
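For reference, the docker filesystem can be resized from the CLI rather than from the Horizon panels. A minimal sketch, assuming the host-fs commands available in recent StarlingX releases and an example target of 60 GiB (cgts-vg needs enough free space, and depending on the release the host may need to be locked first):

  system host-fs-list controller-0              # shows the per-host filesystems, including docker
  system host-fs-modify controller-0 docker=60  # grow docker-lv (/var/lib/docker) to 60 GiB

The "docker-distribution" size shown in Horizon belongs to the local registry filesystem, not to /var/lib/docker, which is why changing it does not affect the mount point above.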
btw, if i try to change the "docker-distribution" value to some other value (eg. 500GB via horizon backend), i got this error: *Error: *backup size of 60 is insufficient for host controller-1. Minimum backup size of 100 is required based upon glance size 20 and database size 20. Rejecting modification request. - see attachment - how can i increase the backup size to handle this error?! greez & thx, volker... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Austin Gillmann -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 25 Nov 2019 11:05:19 -0500 From: Andy Ning To: Subject: Re: [Starlingx-discuss] StarlingX 2.0 Account Locked for User Message-ID: <8f8e6cad-0ad7-4be4-6693-a25e6cec93ad at windriver.com> Content-Type: text/plain; charset="utf-8"; format=flowed On 2019-11-25 12:00 AM, Yong Hu wrote: > Hi Anirudh, > > This issue is similar to this LP#1853017 [0], which was triggered the > account locking if the password of Openstack user "admin" was changed. > In this LP, for unknown reason, "registry-token-server" daemon kept > accessing "keystone" on host (not the instance in containers) with > obsolete token and led to "admin" account locked after 5 attempts. > If registry token server keeps on accessing keystone with obsolete token, this could be a bug in the token server. Normally if a keystone client get a failed authentication, it should try to retrieve a new token by using username/password. Andy > Right now, I am debugging this issue. > Good thing is Cengn build 11/16 seemed not to have such a problem > (LP#1853017). > > You might have a try with this version? > BTW: which cengn build were you using? > > > [0] https://bugs.launchpad.net/starlingx/+bug/1853017 > > regards, > Yong > > On 2019/11/19 6:06 PM, Anirudh Gupta wrote: >> Hi Team, >> >> I have installed StarlingX 2.0 Duplex Bare Metal. >> >> I am trying to create 2 VM's and repeating this cycle a number of times. >> >> After using the setup for around half and hour, I am not longer able >> to access the GUI. >> >> Ever though I type correct Username/Password, it gives an error of >> Invalid Credentials. 
>> >> Then, I thought to use the CLI commands by following LOAD CLI section >> given in link >> >> https://docs.starlingx.io/deploy_install_guides/r2_release/openstack/ >> access.html#local-cli >> >> >> But with this also, I am facing the same error >> >> controller-0:~$ export OS_CLOUD=openstack_helm >> >> controller-0:~$ openstack endpoint list >> >> The account is locked for user: 230578cde382430a8adac399afab1230. >> (HTTP 401) (Request-ID: req-6da6d59a-2edd-4f2b-a8bf-f13f2e423a77) >> >> Earlier this automatically started working after 5-7 mins. >> >> But this time, I am completely Blocked. >> >> I have also raised a bug regarding the same >> >> https://bugs.launchpad.net/starlingx/+bug/1853093 >> >> Please suggest some pointers, so that I can unblock and resume my >> activities. >> >> Regards >> >> Anirudh Gupta >> >> DISCLAIMER: This electronic message and all of its contents, contains >> information which is privileged, confidential or otherwise protected >> from disclosure. The information contained in this electronic mail >> transmission is intended for use only by the individual or entity to >> which it is addressed. If you are not the intended recipient or may >> have received this electronic mail transmission in error, please >> notify the sender immediately and delete / destroy all copies of this >> electronic mail transmission without disclosing, copying, >> distributing, forwarding, printing or retaining any part of it. >> Hughes Systique accepts no responsibility for loss or damage arising >> from the use of the information transmitted by this email including >> damage from virus. >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr ------------------------------ Subject: Digest Footer _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ End of Starlingx-discuss Digest, Vol 18, Issue 150 ************************************************** _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Volker.Hoesslin at swsn.de Tue Dec 3 08:08:10 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Tue, 3 Dec 2019 08:08:10 +0000 Subject: [Starlingx-discuss] the way to enable swift in starlingx In-Reply-To: <2at5p001dm3gsa8l@shdsegapp1> References: <56829C2A36C2E542B0CCB9854828E4D85628A95E@CDSMSX102.ccr.corp.intel.com> , <2at5p001dm3gsa8l@shdsegapp1> Message-ID: i also miss the GUI inside the openstack dashboard (horizon), can i active this one? volker ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. 
Dezember 2019 16:26 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: Re: [Starlingx-discuss] the way to enable swift in starlingx You should check in such way [sysadmin at controller-0 ~(keystone_admin)]$ openstack --os-username 'admin' --os-password 'Local.123' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne endpoint list | grep object | a67918db88604bcc87504c3ec72c745c | RegionOne | swift | object-store | True | public | http://10.10.10.3:7480/swift/v1 | | d0974532671d457995ccd5c8e2c5f5eb | RegionOne | swift | object-store | True | admin | http://192.168.204.1:7480/swift/v1 | | e096598cdbec45878240aa5ac75e2047 | RegionOne | swift | object-store | True | internal | http://192.168.204.1:7480/swift/v1 | [sysadmin at controller-0 ~(keystone_admin)]$ BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: von Hoesslin, Volker Sent: Monday, December 2, 2019 9:00 PM To: Chen, Haochuan Z ; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io' ; Hu, Yong Subject: AW: the way to enable swift in starlingx hi, thx for this code-snippets but doesnt work for me :( there are no errors but also no new endpoint... controller-0:~$ source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify radosgw config service_enabled=true +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | ee200d06-d800-4dfa-83aa-b35d1fde61f6 | | service | radosgw | | section | config | | name | service_enabled | | value | true | | personality | None | | resource | None | +-------------+--------------------------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply radosgw Applying radosgw service parameters [sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true +------------+--------------------+ | Property | Value | +------------+--------------------+ | attributes | {u'enabled': True} | | name | ceph-rgw | | namespace | openstack | +------------+--------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | True | | app_version | 1.0-17-centos-stable-latest | | created_at | 2019-11-18T16:21:19.076937+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2019-11-28T16:37:31.859690+00:00 | +---------------+----------------------------------+ Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress. 
[sysadmin at controller-0 ~(keystone_admin)]$ watch system application-show stx-openstack [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | f279f864c46e469cafa16fd77d0605b0 | RegionOne | fm | faultmanagement | True | admin | http://192.168.204.2:18002 | | aad37714717540e49af212768daf9258 | RegionOne | fm | faultmanagement | True | internal | http://192.168.204.2:18002 | | 7a7a980041e94eda8e0575ec7f6472c2 | RegionOne | fm | faultmanagement | True | public | http://10.10.10.2:18002 | | d7765db9214249e2ae97958043521061 | RegionOne | patching | patching | True | admin | http://192.168.204.2:5491 | | 678a5de6ff9044afacd145fbdad234a1 | RegionOne | patching | patching | True | internal | http://192.168.204.2:5491 | | 2c6e5794267844f4af577bc71183e0f7 | RegionOne | patching | patching | True | public | http://10.10.10.2:15491 | | b21e663dd3894ebb8465b4b8418f2a47 | RegionOne | vim | nfv | True | admin | http://192.168.204.2:4545 | | 185182d4b1e740ee9e6311a55791fed7 | RegionOne | vim | nfv | True | internal | http://192.168.204.2:4545 | | 63d6b149358b475eb97d97bd3414048e | RegionOne | vim | nfv | True | public | http://10.10.10.2:4545 | | 2cc7741368cd4ca1920322f601ece48c | RegionOne | smapi | smapi | True | admin | http://192.168.204.2:7777 | | 9bd331db8f9d455ab06ef7dc4dc79660 | RegionOne | smapi | smapi | True | internal | http://192.168.204.2:7777 | | bae82a84d6e94cd198ec2c73b2dba2c0 | RegionOne | smapi | smapi | True | public | http://10.10.10.2:7777 | | 594394a663c64fe484c097fcfbf8b2db | RegionOne | keystone | identity | True | admin | http://192.168.204.2:5000/v3 | | 3804a32d209a4a929cff8321c408fe5e | RegionOne | keystone | identity | True | internal | http://192.168.204.2:5000/v3 | | 69f30354ea1c407480de2a70719f65d5 | RegionOne | keystone | identity | True | public | http://10.10.10.2:5000/v3 | | 0a5686180b104c708e741048a5ddca86 | RegionOne | barbican | key-manager | True | admin | http://192.168.204.2:9311 | | b8243821ca9443b2ab1036a35157ba73 | RegionOne | barbican | key-manager | True | internal | http://192.168.204.2:9311 | | d7284357f1374940bee22a6207152f39 | RegionOne | barbican | key-manager | True | public | http://10.10.10.2:9311 | | da935387386442b995b1812b307e228a | RegionOne | sysinv | platform | True | admin | http://192.168.204.2:6385/v1 | | d135073f7952407fbccc113e2ebfc296 | RegionOne | sysinv | platform | True | internal | http://192.168.204.2:6385/v1 | | b54be3ccb9804af0bfc579a12ea5afcc | RegionOne | sysinv | platform | True | public | http://10.10.10.2:6385/v1 | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ any suggestions? greez & thx, volker... ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. 
Dezember 2019 05:52 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: the way to enable swift in starlingx Hi system application-upload system service-parameter-modify radosgw config service_enabled=true system service-parameter-apply radosgw system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true system application-apply stx-openstack openstack endpoint list | grep object BR! -----Original Message----- From: Chen, Haochuan Z Sent: Thursday, November 28, 2019 10:12 AM To: starlingx-discuss at lists.starlingx.io Cc: Volker.Hoesslin at swsn.de; ji at sibyl.li Subject: swift enabling Hi I find there is voice to enable swift, why it was removed? Thanks! Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Message: 3 Date: Fri, 22 Nov 2019 14:14:15 +0000 From: "von Hoesslin, Volker" To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] STX 2.0: swift? Message-ID: Content-Type: text/plain; charset="iso-8859-1" hi, how can i add the object storage (swift) feature to my current STX2.0 openstack? BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: starlingx-discuss-request at lists.starlingx.io Sent: Tuesday, November 26, 2019 12:05 AM To: starlingx-discuss at lists.starlingx.io Subject: Starlingx-discuss Digest, Vol 18, Issue 150 Send Starlingx-discuss mailing list submissions to starlingx-discuss at lists.starlingx.io To subscribe or unsubscribe via the World Wide Web, visit http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss or, via email, send a message with subject or body 'help' to starlingx-discuss-request at lists.starlingx.io You can reach the person managing the list at starlingx-discuss-owner at lists.starlingx.io When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." Today's Topics: 1. Re: controller filesystem (Waines, Greg) 2. Enabling Object Storage (joji vlogs) 3. Re: StarlingX 2.0 Account Locked for User (Andy Ning) ---------------------------------------------------------------------- Message: 1 Date: Mon, 25 Nov 2019 13:26:31 +0000 From: "Waines, Greg" To: Saul Wold , "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Message-ID: Content-Type: text/plain; charset="utf-8" Actually a section on managing filesystems on the controllers is planned for STX 3.0 ... based on the proposed TOC for the new Operations Guide. Greg From: Saul Wold Date: Thursday, November 21, 2019 at 12:13 PM To: "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Kristal: We probably need a Story/Task added to the documentation to get the documentation of host-fs added in the right place. This helps with filesystem re-sizing. 
Maybe it's already there. Sau! On 11/21/19 7:12 AM, von Hoesslin, Volker wrote: incredible !!! thats the point i have search! big thx! ------------------------------------------------------------------------ *Von:* Sun, Austin [austin.sun at intel.com] *Gesendet:* Donnerstag, 21. November 2019 15:20 *An:* von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io *Betreff:* Re: [Starlingx-discuss] controller filesystem Hi Volker: From the email chain, http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005656.html You can probably the command line to change the size of docker lv Thanks. BR Austin Sun. *From:* von Hoesslin, Volker > *Sent:* Thursday, November 21, 2019 9:20 PM *To:* starlingx-discuss at lists.starlingx.io *Subject:* [Starlingx-discuss] controller filesystem hi, i trying to import some existing qcow2 images into my new installed STX 2.0. openstack image create --file /media/foobar.qcow2 --private --unprotected --disk-format qcow2 "foobar" all works fine, the image are available. now, i'm trying to create an new volume based on this images. if the images are <=6GB all works fine, but some images are very huge (10-200GB) and then it ends in an error. after some research i can see on controller the mount point /dev/mapper/cgts--vg-docker--lv 30G 11G 20G 35% /var/lib/docker increase the used storage. after fail "volume create" it goes back to given value 35%. in my oppinion, this mount point should resize to some value about 300-400GB, but how? in STX horizon backend (http://10.10.10.2:8080/admin/system_config/?tab=system_config_tab__storage_table) there is an "docker-distribution", but no docker-mount-point itself? btw, if i try to change the "docker-distribution" value to some other value (eg. 500GB via horizon backend), i got this error: *Error: *backup size of 60 is insufficient for host controller-1. Minimum backup size of 100 is required based upon glance size 20 and database size 20. Rejecting modification request. - see attachment - how can i increase the backup size to handle this error?! greez & thx, volker... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Austin Gillmann -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 3 Date: Mon, 25 Nov 2019 11:05:19 -0500 From: Andy Ning To: Subject: Re: [Starlingx-discuss] StarlingX 2.0 Account Locked for User Message-ID: <8f8e6cad-0ad7-4be4-6693-a25e6cec93ad at windriver.com> Content-Type: text/plain; charset="utf-8"; format=flowed On 2019-11-25 12:00 AM, Yong Hu wrote: > Hi Anirudh, > > This issue is similar to this LP#1853017 [0], which was triggered the > account locking if the password of Openstack user "admin" was changed. > In this LP, for unknown reason, "registry-token-server" daemon kept > accessing "keystone" on host (not the instance in containers) with > obsolete token and led to "admin" account locked after 5 attempts. > If registry token server keeps on accessing keystone with obsolete token, this could be a bug in the token server. Normally if a keystone client get a failed authentication, it should try to retrieve a new token by using username/password. Andy > Right now, I am debugging this issue. > Good thing is Cengn build 11/16 seemed not to have such a problem > (LP#1853017). > > You might have a try with this version? > BTW: which cengn build were you using? > > > [0] https://bugs.launchpad.net/starlingx/+bug/1853017 > > regards, > Yong > > On 2019/11/19 6:06 PM, Anirudh Gupta wrote: >> Hi Team, >> >> I have installed StarlingX 2.0 Duplex Bare Metal. >> >> I am trying to create 2 VM's and repeating this cycle a number of times. >> >> After using the setup for around half and hour, I am not longer able >> to access the GUI. >> >> Ever though I type correct Username/Password, it gives an error of >> Invalid Credentials. >> >> Then, I thought to use the CLI commands by following LOAD CLI section >> given in link >> >> https://docs.starlingx.io/deploy_install_guides/r2_release/openstack/ >> access.html#local-cli >> >> >> But with this also, I am facing the same error >> >> controller-0:~$ export OS_CLOUD=openstack_helm >> >> controller-0:~$ openstack endpoint list >> >> The account is locked for user: 230578cde382430a8adac399afab1230. >> (HTTP 401) (Request-ID: req-6da6d59a-2edd-4f2b-a8bf-f13f2e423a77) >> >> Earlier this automatically started working after 5-7 mins. >> >> But this time, I am completely Blocked. >> >> I have also raised a bug regarding the same >> >> https://bugs.launchpad.net/starlingx/+bug/1853093 >> >> Please suggest some pointers, so that I can unblock and resume my >> activities. >> >> Regards >> >> Anirudh Gupta >> >> DISCLAIMER: This electronic message and all of its contents, contains >> information which is privileged, confidential or otherwise protected >> from disclosure. The information contained in this electronic mail >> transmission is intended for use only by the individual or entity to >> which it is addressed. If you are not the intended recipient or may >> have received this electronic mail transmission in error, please >> notify the sender immediately and delete / destroy all copies of this >> electronic mail transmission without disclosing, copying, >> distributing, forwarding, printing or retaining any part of it. >> Hughes Systique accepts no responsibility for loss or damage arising >> from the use of the information transmitted by this email including >> damage from virus. 
>> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr ------------------------------ Subject: Digest Footer _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ End of Starlingx-discuss Digest, Vol 18, Issue 150 ************************************************** _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From haochuan.z.chen at intel.com Tue Dec 3 08:18:34 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 3 Dec 2019 08:18:34 +0000 Subject: [Starlingx-discuss] ceph ops enabling in sysinv-conductor In-Reply-To: References: <56829C2A36C2E542B0CCB9854828E4D85628A993@CDSMSX102.ccr.corp.intel.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628AC31@CDSMSX102.ccr.corp.intel.com>
Hi Bob
1. We could update the default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps is applied, the user may not have added any OSDs yet, so the default pg num would still only be a reference value. As for the user override, I think it should be added in ceph-pools-audit (stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml); however, a pg num update triggers rebalancing, which could jam the management network.
2. The case above only covers the system application. Any further PG-number alarm is something we should ask the user to manage.
BR! Martin, Chen SSP, Software Engineer 021-61164330
From: Church, Robert Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z Cc: 'starlingx-discuss at lists.starlingx.io' ; Poncea, Ovidiu Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently no longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py remain from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps.
This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Volker.Hoesslin at swsn.de Tue Dec 3 08:40:46 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Tue, 3 Dec 2019 08:40:46 +0000 Subject: [Starlingx-discuss] the way to enable swift in starlingx In-Reply-To: References: <56829C2A36C2E542B0CCB9854828E4D85628A95E@CDSMSX102.ccr.corp.intel.com> , <2at5p001dm3gsa8l@shdsegapp1>, Message-ID: ok, just ignore the last question, the GUI is appeared. but if i try to click onto the menu link "Project -> Object Store -> Containers", the new interface is shown shortly, some red error bubbels appear and direkt redirect to the openstack-dashboard login screen. maybe i missed some rights from keystone? i have tried this with admin account. GET http://10.10.10.2:31000/api/swift/containers/ "not logged in" seems that something is missing, also the CLI cant handle the default way? controller-0:~$ cat /etc/openstack/openrc export OS_CLOUD=openstack_helm controller-0:~$ source /etc/openstack/openrc controller-0:~$ swift list Auth version 1.0 requires ST_AUTH, ST_USER, and ST_KEY environment variables to be set or overridden with -A, -U, or -K. Auth version 2.0 requires OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, and OS_TENANT_NAME OS_TENANT_ID to be set or overridden with --os-auth-url, --os-username, --os-password, --os-tenant-name or os-tenant-id. Note: adding "-V 2" is necessary for this. controller-0:~$ swift list -V 2 Auth version 1.0 requires ST_AUTH, ST_USER, and ST_KEY environment variables to be set or overridden with -A, -U, or -K. 
Auth version 2.0 requires OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, and OS_TENANT_NAME OS_TENANT_ID to be set or overridden with --os-auth-url, --os-username, --os-password, --os-tenant-name or os-tenant-id. Note: adding "-V 2" is necessary for this. controller-0:~$ openstack container list Unauthorized (HTTP 401) (Request-ID: tx00000000000000000001a-005de61f4f-4e9fb6-default) any sugestions? volker. ________________________________________ Von: von Hoesslin, Volker Gesendet: Dienstag, 3. Dezember 2019 09:08 An: Chen, Haochuan Z; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: AW: the way to enable swift in starlingx i also miss the GUI inside the openstack dashboard (horizon), can i active this one? volker ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. Dezember 2019 16:26 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: Re: [Starlingx-discuss] the way to enable swift in starlingx You should check in such way [sysadmin at controller-0 ~(keystone_admin)]$ openstack --os-username 'admin' --os-password 'Local.123' --os-project-name admin --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-user-domain-name Default --os-project-domain-name Default --os-identity-api-version 3 --os-interface internal --os-region-name RegionOne endpoint list | grep object | a67918db88604bcc87504c3ec72c745c | RegionOne | swift | object-store | True | public | http://10.10.10.3:7480/swift/v1 | | d0974532671d457995ccd5c8e2c5f5eb | RegionOne | swift | object-store | True | admin | http://192.168.204.1:7480/swift/v1 | | e096598cdbec45878240aa5ac75e2047 | RegionOne | swift | object-store | True | internal | http://192.168.204.1:7480/swift/v1 | [sysadmin at controller-0 ~(keystone_admin)]$ BR! Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: von Hoesslin, Volker Sent: Monday, December 2, 2019 9:00 PM To: Chen, Haochuan Z ; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io' ; Hu, Yong Subject: AW: the way to enable swift in starlingx hi, thx for this code-snippets but doesnt work for me :( there are no errors but also no new endpoint... 
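The swift CLI errors above are an auth-settings problem: with only OS_CLOUD exported, the legacy swift client sees neither the ST_* nor the OS_* variables it wants. One way around it is to pass keystone v3 credentials explicitly, for example with python-swiftclient; a minimal sketch under that assumption (auth URL as used elsewhere in this thread, password is a placeholder):

    # Sketch: talk to the radosgw/swift endpoint with explicit keystone v3
    # credentials instead of relying on ST_AUTH / OS_* environment variables.
    from swiftclient import client as swift_client

    conn = swift_client.Connection(
        authurl='http://keystone.openstack.svc.cluster.local/v3',
        user='admin',
        key='<admin-password>',
        auth_version='3',
        os_options={
            'project_name': 'admin',
            'user_domain_name': 'Default',
            'project_domain_name': 'Default',
            'region_name': 'RegionOne',
        },
    )

    headers, containers = conn.get_account()   # rough equivalent of "swift list"
    for c in containers:
        print(c['name'])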
controller-0:~$ source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-modify radosgw config service_enabled=true +-------------+--------------------------------------+ | Property | Value | +-------------+--------------------------------------+ | uuid | ee200d06-d800-4dfa-83aa-b35d1fde61f6 | | service | radosgw | | section | config | | name | service_enabled | | value | true | | personality | None | | resource | None | +-------------+--------------------------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system service-parameter-apply radosgw Applying radosgw service parameters [sysadmin at controller-0 ~(keystone_admin)]$ system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true +------------+--------------------+ | Property | Value | +------------+--------------------+ | attributes | {u'enabled': True} | | name | ceph-rgw | | namespace | openstack | +------------+--------------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system application-apply stx-openstack +---------------+----------------------------------+ | Property | Value | +---------------+----------------------------------+ | active | True | | app_version | 1.0-17-centos-stable-latest | | created_at | 2019-11-18T16:21:19.076937+00:00 | | manifest_file | stx-openstack.yaml | | manifest_name | armada-manifest | | name | stx-openstack | | progress | None | | status | applying | | updated_at | 2019-11-28T16:37:31.859690+00:00 | +---------------+----------------------------------+ Please use 'system application-list' or 'system application-show stx-openstack' to view the current progress. [sysadmin at controller-0 ~(keystone_admin)]$ watch system application-show stx-openstack [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list | grep object [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ openstack endpoint list +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ | f279f864c46e469cafa16fd77d0605b0 | RegionOne | fm | faultmanagement | True | admin | http://192.168.204.2:18002 | | aad37714717540e49af212768daf9258 | RegionOne | fm | faultmanagement | True | internal | http://192.168.204.2:18002 | | 7a7a980041e94eda8e0575ec7f6472c2 | RegionOne | fm | faultmanagement | True | public | http://10.10.10.2:18002 | | d7765db9214249e2ae97958043521061 | RegionOne | patching | patching | True | admin | http://192.168.204.2:5491 | | 678a5de6ff9044afacd145fbdad234a1 | RegionOne | patching | patching | True | internal | http://192.168.204.2:5491 | | 2c6e5794267844f4af577bc71183e0f7 | RegionOne | patching | patching | True | public | http://10.10.10.2:15491 | | b21e663dd3894ebb8465b4b8418f2a47 | RegionOne | vim | nfv | True | admin | http://192.168.204.2:4545 | | 185182d4b1e740ee9e6311a55791fed7 | RegionOne | vim | nfv | True | internal | http://192.168.204.2:4545 | | 63d6b149358b475eb97d97bd3414048e | RegionOne | vim | nfv | True | public | http://10.10.10.2:4545 | | 2cc7741368cd4ca1920322f601ece48c | RegionOne | smapi | smapi | True | admin | http://192.168.204.2:7777 | | 9bd331db8f9d455ab06ef7dc4dc79660 | RegionOne | smapi | smapi | True | internal | http://192.168.204.2:7777 | | 
bae82a84d6e94cd198ec2c73b2dba2c0 | RegionOne | smapi | smapi | True | public | http://10.10.10.2:7777 | | 594394a663c64fe484c097fcfbf8b2db | RegionOne | keystone | identity | True | admin | http://192.168.204.2:5000/v3 | | 3804a32d209a4a929cff8321c408fe5e | RegionOne | keystone | identity | True | internal | http://192.168.204.2:5000/v3 | | 69f30354ea1c407480de2a70719f65d5 | RegionOne | keystone | identity | True | public | http://10.10.10.2:5000/v3 | | 0a5686180b104c708e741048a5ddca86 | RegionOne | barbican | key-manager | True | admin | http://192.168.204.2:9311 | | b8243821ca9443b2ab1036a35157ba73 | RegionOne | barbican | key-manager | True | internal | http://192.168.204.2:9311 | | d7284357f1374940bee22a6207152f39 | RegionOne | barbican | key-manager | True | public | http://10.10.10.2:9311 | | da935387386442b995b1812b307e228a | RegionOne | sysinv | platform | True | admin | http://192.168.204.2:6385/v1 | | d135073f7952407fbccc113e2ebfc296 | RegionOne | sysinv | platform | True | internal | http://192.168.204.2:6385/v1 | | b54be3ccb9804af0bfc579a12ea5afcc | RegionOne | sysinv | platform | True | public | http://10.10.10.2:6385/v1 | +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------+ any suggestions? greez & thx, volker... ________________________________________ Von: Chen, Haochuan Z [haochuan.z.chen at intel.com] Gesendet: Montag, 2. Dezember 2019 05:52 An: von Hoesslin, Volker; 'ji at sibyl.li' Cc: 'starlingx-discuss at lists.starlingx.io'; Hu, Yong Betreff: the way to enable swift in starlingx Hi system application-upload system service-parameter-modify radosgw config service_enabled=true system service-parameter-apply radosgw system helm-chart-attribute-modify stx-openstack ceph-rgw openstack --enabled=true system application-apply stx-openstack openstack endpoint list | grep object BR! -----Original Message----- From: Chen, Haochuan Z Sent: Thursday, November 28, 2019 10:12 AM To: starlingx-discuss at lists.starlingx.io Cc: Volker.Hoesslin at swsn.de; ji at sibyl.li Subject: swift enabling Hi I find there is voice to enable swift, why it was removed? Thanks! Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Message: 3 Date: Fri, 22 Nov 2019 14:14:15 +0000 From: "von Hoesslin, Volker" To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] STX 2.0: swift? Message-ID: Content-Type: text/plain; charset="iso-8859-1" hi, how can i add the object storage (swift) feature to my current STX2.0 openstack? BR! 
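Once the radosgw service parameter and the ceph-rgw chart are enabled and stx-openstack is re-applied, the check is simply whether an object-store service and its endpoints show up in the containerized keystone. A small openstacksdk sketch, assuming a clouds.yaml entry named openstack_helm with admin credentials as referenced in this thread:

    # Sketch: verify that swift/radosgw object-store endpoints exist in the
    # containerized keystone. Assumes a working "openstack_helm" cloud entry.
    import openstack

    conn = openstack.connect(cloud='openstack_helm')

    swift_svcs = [s for s in conn.identity.services() if s.type == 'object-store']
    if not swift_svcs:
        print('no object-store service registered: radosgw/ceph-rgw not enabled yet')
    for svc in swift_svcs:
        for ep in conn.identity.endpoints(service_id=svc.id):
            print(ep.interface, ep.url)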
Martin, Chen SSP, Software Engineer 021-61164330 -----Original Message----- From: starlingx-discuss-request at lists.starlingx.io Sent: Tuesday, November 26, 2019 12:05 AM To: starlingx-discuss at lists.starlingx.io Subject: Starlingx-discuss Digest, Vol 18, Issue 150 Send Starlingx-discuss mailing list submissions to starlingx-discuss at lists.starlingx.io To subscribe or unsubscribe via the World Wide Web, visit http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss or, via email, send a message with subject or body 'help' to starlingx-discuss-request at lists.starlingx.io You can reach the person managing the list at starlingx-discuss-owner at lists.starlingx.io When replying, please edit your Subject line so it is more specific than "Re: Contents of Starlingx-discuss digest..." Today's Topics: 1. Re: controller filesystem (Waines, Greg) 2. Enabling Object Storage (joji vlogs) 3. Re: StarlingX 2.0 Account Locked for User (Andy Ning) ---------------------------------------------------------------------- Message: 1 Date: Mon, 25 Nov 2019 13:26:31 +0000 From: "Waines, Greg" To: Saul Wold , "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Message-ID: Content-Type: text/plain; charset="utf-8" Actually a section on managing filesystems on the controllers is planned for STX 3.0 ... based on the proposed TOC for the new Operations Guide. Greg From: Saul Wold Date: Thursday, November 21, 2019 at 12:13 PM To: "von Hoesslin, Volker" , "Sun, Austin" , "starlingx-discuss at lists.starlingx.io" , "Dale, Kristal" Subject: Re: [Starlingx-discuss] controller filesystem Kristal: We probably need a Story/Task added to the documentation to get the documentation of host-fs added in the right place. This helps with filesystem re-sizing. Maybe it's already there. Sau! On 11/21/19 7:12 AM, von Hoesslin, Volker wrote: incredible !!! thats the point i have search! big thx! ------------------------------------------------------------------------ *Von:* Sun, Austin [austin.sun at intel.com] *Gesendet:* Donnerstag, 21. November 2019 15:20 *An:* von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io *Betreff:* Re: [Starlingx-discuss] controller filesystem Hi Volker: From the email chain, http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005656.html You can probably the command line to change the size of docker lv Thanks. BR Austin Sun. *From:* von Hoesslin, Volker > *Sent:* Thursday, November 21, 2019 9:20 PM *To:* starlingx-discuss at lists.starlingx.io *Subject:* [Starlingx-discuss] controller filesystem hi, i trying to import some existing qcow2 images into my new installed STX 2.0. openstack image create --file /media/foobar.qcow2 --private --unprotected --disk-format qcow2 "foobar" all works fine, the image are available. now, i'm trying to create an new volume based on this images. if the images are <=6GB all works fine, but some images are very huge (10-200GB) and then it ends in an error. after some research i can see on controller the mount point /dev/mapper/cgts--vg-docker--lv 30G 11G 20G 35% /var/lib/docker increase the used storage. after fail "volume create" it goes back to given value 35%. in my oppinion, this mount point should resize to some value about 300-400GB, but how? in STX horizon backend (http://10.10.10.2:8080/admin/system_config/?tab=system_config_tab__storage_table) there is an "docker-distribution", but no docker-mount-point itself? 
btw, if i try to change the "docker-distribution" value to some other value (eg. 500GB via horizon backend), i got this error: *Error: *backup size of 60 is insufficient for host controller-1. Minimum backup size of 100 is required based upon glance size 20 and database size 20. Rejecting modification request. - see attachment - how can i increase the backup size to handle this error?! greez & thx, volker... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 25 Nov 2019 07:53:31 -0600 From: joji vlogs To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Enabling Object Storage Message-ID: Content-Type: text/plain; charset="utf-8" Hi all, I recently did a test deployment based on a 2+2+2 setup with one minor difference being I only have one storage server on hand. My question is how would I go about enabling object storage? I saw a reference to it in the helm charts and attempted to set an override but the APIs would not be listening for requests and a "unable to get swift service info" error. Thank you and have a great day! Austin Gillmann -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 25 Nov 2019 11:05:19 -0500 From: Andy Ning To: Subject: Re: [Starlingx-discuss] StarlingX 2.0 Account Locked for User Message-ID: <8f8e6cad-0ad7-4be4-6693-a25e6cec93ad at windriver.com> Content-Type: text/plain; charset="utf-8"; format=flowed On 2019-11-25 12:00 AM, Yong Hu wrote: > Hi Anirudh, > > This issue is similar to this LP#1853017 [0], which was triggered the > account locking if the password of Openstack user "admin" was changed. > In this LP, for unknown reason, "registry-token-server" daemon kept > accessing "keystone" on host (not the instance in containers) with > obsolete token and led to "admin" account locked after 5 attempts. > If registry token server keeps on accessing keystone with obsolete token, this could be a bug in the token server. Normally if a keystone client get a failed authentication, it should try to retrieve a new token by using username/password. Andy > Right now, I am debugging this issue. > Good thing is Cengn build 11/16 seemed not to have such a problem > (LP#1853017). > > You might have a try with this version? > BTW: which cengn build were you using? > > > [0] https://bugs.launchpad.net/starlingx/+bug/1853017 > > regards, > Yong > > On 2019/11/19 6:06 PM, Anirudh Gupta wrote: >> Hi Team, >> >> I have installed StarlingX 2.0 Duplex Bare Metal. >> >> I am trying to create 2 VM's and repeating this cycle a number of times. >> >> After using the setup for around half and hour, I am not longer able >> to access the GUI. >> >> Ever though I type correct Username/Password, it gives an error of >> Invalid Credentials. 
>> >> Then, I thought to use the CLI commands by following LOAD CLI section >> given in link >> >> https://docs.starlingx.io/deploy_install_guides/r2_release/openstack/ >> access.html#local-cli >> >> >> But with this also, I am facing the same error >> >> controller-0:~$ export OS_CLOUD=openstack_helm >> >> controller-0:~$ openstack endpoint list >> >> The account is locked for user: 230578cde382430a8adac399afab1230. >> (HTTP 401) (Request-ID: req-6da6d59a-2edd-4f2b-a8bf-f13f2e423a77) >> >> Earlier this automatically started working after 5-7 mins. >> >> But this time, I am completely Blocked. >> >> I have also raised a bug regarding the same >> >> https://bugs.launchpad.net/starlingx/+bug/1853093 >> >> Please suggest some pointers, so that I can unblock and resume my >> activities. >> >> Regards >> >> Anirudh Gupta >> >> DISCLAIMER: This electronic message and all of its contents, contains >> information which is privileged, confidential or otherwise protected >> from disclosure. The information contained in this electronic mail >> transmission is intended for use only by the individual or entity to >> which it is addressed. If you are not the intended recipient or may >> have received this electronic mail transmission in error, please >> notify the sender immediately and delete / destroy all copies of this >> electronic mail transmission without disclosing, copying, >> distributing, forwarding, printing or retaining any part of it. >> Hughes Systique accepts no responsibility for loss or damage arising >> from the use of the information transmitted by this email including >> damage from virus. >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr ------------------------------ Subject: Digest Footer _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ End of Starlingx-discuss Digest, Vol 18, Issue 150 ************************************************** _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From mingyuan.qi at intel.com Tue Dec 3 08:56:12 2019 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Tue, 3 Dec 2019 08:56:12 +0000 Subject: [Starlingx-discuss] [Container]Introduce a user application generation tool Message-ID: Hi Container team, In the past few months, I've developed more than 10 armada user applications on stx. However, I've been struggling to create these apps by building the dir hierarchy, writing armada manifest and fighting against stx build errors/runtime sysinv errors/armada errors. There are good reasons for stx system application(e.g. platform-integ-apps) to leverage rpm build model to ensure the consistency of the build system. But for customer's applications that are not in-tree code, flexible to integrate various helm charts, no runtime override needed, this build model is more of a burden. 
Therefore, I developed a user application generation tool aiming to simplify the app development steps. This tool completely decouples app development from stx build, which means the app developers no longer need to fetch stx code/build tool nor to build app by stx build system. The main features of this tool are: 1. One command to package chart, generate manifest, checksum and package app. 2. Supports local dir, git repo and tarball as chart source. 3. The app manifest abstracts a few important fields from armada schema for user to lower the learning curve of armada. 4. Static value overrides allowed in app manifest I've submitted a draft version of this tool for review[0], and created an etherpad[1] to describe more details. I'd like to know your thoughts about this tool and anything about user application development, feel free to review the commit[0] and/or add comments in etherpad[1] or mailing list. All opinions are welcome. [0] https://review.opendev.org/#/c/697013/ [1] https://etherpad.openstack.org/p/stx_app_gen_tool Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dariush.Eslimi at windriver.com Tue Dec 3 13:24:29 2019 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Tue, 3 Dec 2019 13:24:29 +0000 Subject: [Starlingx-discuss] [Container]Introduce a user application generation tool In-Reply-To: References: Message-ID: Very cool idea, Thanks Mingyuan. From: Qi, Mingyuan [mailto:mingyuan.qi at intel.com] Sent: December-03-19 3:56 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Container]Introduce a user application generation tool Hi Container team, In the past few months, I've developed more than 10 armada user applications on stx. However, I've been struggling to create these apps by building the dir hierarchy, writing armada manifest and fighting against stx build errors/runtime sysinv errors/armada errors. There are good reasons for stx system application(e.g. platform-integ-apps) to leverage rpm build model to ensure the consistency of the build system. But for customer's applications that are not in-tree code, flexible to integrate various helm charts, no runtime override needed, this build model is more of a burden. Therefore, I developed a user application generation tool aiming to simplify the app development steps. This tool completely decouples app development from stx build, which means the app developers no longer need to fetch stx code/build tool nor to build app by stx build system. The main features of this tool are: 1. One command to package chart, generate manifest, checksum and package app. 2. Supports local dir, git repo and tarball as chart source. 3. The app manifest abstracts a few important fields from armada schema for user to lower the learning curve of armada. 4. Static value overrides allowed in app manifest I've submitted a draft version of this tool for review[0], and created an etherpad[1] to describe more details. I'd like to know your thoughts about this tool and anything about user application development, feel free to review the commit[0] and/or add comments in etherpad[1] or mailing list. All opinions are welcome. [0] https://review.opendev.org/#/c/697013/ [1] https://etherpad.openstack.org/p/stx_app_gen_tool Best Regards, Mingyuan -------------- next part -------------- An HTML attachment was scrubbed... 
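For context on feature 1 above (package chart, generate manifest, checksum and package app), a rough sketch of what such a packaging step looks like. The directory layout and file names here are assumptions for illustration, not the actual output of Mingyuan's tool:

    # Rough sketch of an application packaging step: tar up the armada manifest
    # plus chart tarballs and record an md5 checksum. Layout and names are
    # illustrative assumptions only.
    import hashlib
    import tarfile
    from pathlib import Path

    def package_app(app_dir, app_name, version):
        app_dir = Path(app_dir)                   # expects manifest.yaml and charts/*.tgz
        out = app_dir / f'{app_name}-{version}.tgz'

        with tarfile.open(out, 'w:gz') as tar:
            tar.add(app_dir / 'manifest.yaml', arcname='manifest.yaml')
            for chart in sorted((app_dir / 'charts').glob('*.tgz')):
                tar.add(chart, arcname=f'charts/{chart.name}')

        md5 = hashlib.md5(out.read_bytes()).hexdigest()
        (app_dir / 'checksum.md5').write_text(f'{md5}  {out.name}\n')
        return out

    # package_app('/tmp/my-app', 'my-app', '1.0-0')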
URL: From scott.little at windriver.com Tue Dec 3 15:34:14 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 3 Dec 2019 10:34:14 -0500 Subject: [Starlingx-discuss] master branch 20191203T000000Z load is broken Message-ID: <9fc67b71-fe78-7e98-dcd5-f28bb5d12d2e@windriver.com> The CENGN master branch build 20191203T000000Z is unusable. I had a multi-part update to change to SW_VERSION to 20.01 and failed to add the required depends on.  On part merged and another didn't. We are now seeing install failures based on mismatched SW_VERSION. Any master branch builds based on a code snapshots between 2019/12/02 21:07:00 UTC and 2019/12/03 15:30:00 UTC will also be broken The missing part has merged and a new CENGN build is underway. With apologies ... Scott From Robert.Church at windriver.com Tue Dec 3 16:08:35 2019 From: Robert.Church at windriver.com (Church, Robert) Date: Tue, 3 Dec 2019 16:08:35 +0000 Subject: [Starlingx-discuss] ceph ops enabling in sysinv-conductor In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85628AC31@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628A993@CDSMSX102.ccr.corp.intel.com> <56829C2A36C2E542B0CCB9854828E4D85628AC31@CDSMSX102.ccr.corp.intel.com> Message-ID: <82EBE26D-C9F9-4425-B5AE-9FEF1E74BC65@windriver.com> See inline… From: "Chen, Haochuan Z" Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church Cc: "'starlingx-discuss at lists.starlingx.io'" , Ovidiu Poncea Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. * I think it’s potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don’t think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z Cc: 'starlingx-discuss at lists.starlingx.io' ; Poncea, Ovidiu Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. 
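On the "calculate and set an optimal PG number" point in Bob's inline comments above: the usual rule of thumb is roughly 100 PGs per OSD, divided by the replication factor and by the number of pools sharing the OSDs, rounded down to a power of two. A sketch of that calculation; the target of 100 and the rounding policy are the common guideline, not necessarily what the sysinv/rbd-provisioner change will implement:

    # Sketch of a pg_num calculation driven by the number of provisioned OSDs,
    # following the common ~100-PGs-per-OSD guideline.
    def suggest_pg_num(num_osds, replica_size, num_pools=1, target_per_osd=100):
        if num_osds == 0:
            return 0
        raw = (num_osds * target_per_osd) / (replica_size * num_pools)
        pg = 1
        while pg * 2 <= raw:        # round down to a power of two
            pg *= 2
        return pg

    print(suggest_pg_num(num_osds=6, replica_size=2))   # 256 for the 6-OSD cluster discussed above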
In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anirudh.Gupta at hsc.com Mon Dec 2 10:59:09 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Mon, 2 Dec 2019 10:59:09 +0000 Subject: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA In-Reply-To: <9700A18779F35F49AF027300A49E7C76608ED666@SHSMSX105.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C76608ED666@SHSMSX105.ccr.corp.intel.com> Message-ID: Hi Shuicheng, All the services on the Active Controller are "enabled-active" and I can see some services "enabled-standby" on the standby controller. Is there any way we can set all the Openstack Services "enabled-active" on both the Active and Standby Node? Regards Anirudh Gupta From: Lin, Shuicheng Sent: 01 December 2019 12:42 To: Anirudh Gupta ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Anirudh, Per my understanding, for duplex, there is always 1 active node and 1 standby node. 
The "active/active" or "active/standby" in the document is for "services", not for node. If you try to run "sudo sm-dump" in the standby node, you will find some services are "active", while some services are "standby". For the 2nd question, VMs are running in compute node. And for duplex, both controller nodes are compute nodes also. And the "Active/Standby" is for controller, not for compute function, that is why VMs will run in both node. Best Regards Shuicheng From: Anirudh Gupta > Sent: Thursday, November 28, 2019 11:45 AM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, Can someone please give me any update on my Query: I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either "active/active" or "active/standby" https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html * Can someone please suggest the steps to configure Active-Active state? * I have 2 kubernetes pods corresponding to each Openstack Service in Duplex Setup and when I spawn any VM, it goes on any of the two controllers. So, is this the standard implementation in StarlingX? What needs to be done in Active-Standby configuration? * And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta From: Anirudh Gupta Sent: 26 November 2019 10:30 To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either "active/active" or "active/standby" https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html Can someone please suggest the steps to configure Active-Active state? And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. 
Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anirudh.Gupta at hsc.com Tue Dec 3 11:19:20 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Tue, 3 Dec 2019 11:19:20 +0000 Subject: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA In-Reply-To: <9700A18779F35F49AF027300A49E7C76608F5759@SHSMSX105.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C76608ED666@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F5759@SHSMSX105.ccr.corp.intel.com> Message-ID: Hi Shuicheng We specifically wanted to understand whether "Openstack Services" like Nova, Neutron etc are working in Active/Active or Active/StandBy. In the "sm-dump", there are "vim-services" mentioned, but there is no information about the state of individual "Neutron" services and all other. controller-0:/home/sysadmin# sm-dump -Service_Groups------------------------------------------------------------------------ oam-services standby standby controller-services standby standby cloud-services standby standby patching-services standby standby directory-services active active web-services active active storage-services active active storage-monitoring-services standby standby vim-services standby standby --------------------------------------------------------------------------------------- -Services------------------------------------------------------------------------------ oam-ip enabled-standby disabled management-ip enabled-standby disabled drbd-pg enabled-standby enabled-standby drbd-rabbit enabled-standby enabled-standby drbd-cgcs enabled-standby enabled-standby drbd-platform enabled-standby enabled-standby pg-fs enabled-standby disabled rabbit-fs enabled-standby disabled nfs-mgmt enabled-standby disabled cgcs-fs enabled-standby disabled platform-fs enabled-standby disabled postgres enabled-standby disabled rabbit enabled-standby disabled cgcs-export-fs enabled-standby disabled platform-export-fs enabled-standby disabled cgcs-nfs-ip enabled-standby disabled platform-nfs-ip enabled-standby disabled sysinv-inv enabled-standby disabled sysinv-conductor enabled-standby disabled mtc-agent enabled-standby disabled hw-mon enabled-standby disabled dnsmasq enabled-standby disabled fm-mgr enabled-standby disabled keystone enabled-standby disabled open-ldap enabled-active enabled-active snmp enabled-standby disabled lighttpd enabled-active enabled-active horizon enabled-active enabled-active patch-alarm-manager enabled-standby disabled mgr-restful-plugin enabled-active enabled-active ceph-manager enabled-standby disabled vim enabled-standby disabled vim-api enabled-standby disabled vim-webserver enabled-standby disabled guest-agent enabled-standby disabled haproxy enabled-standby disabled pxeboot-ip enabled-standby disabled drbd-extension enabled-standby enabled-standby extension-fs enabled-standby disabled extension-export-fs enabled-standby disabled etcd enabled-standby disabled drbd-etcd enabled-standby enabled-standby etcd-fs enabled-standby disabled barbican-api enabled-standby disabled barbican-keystone-listener enabled-standby disabled barbican-worker enabled-standby disabled cluster-host-ip enabled-standby disabled docker-distribution enabled-standby disabled dockerdistribution-fs enabled-standby disabled drbd-dockerdistribution enabled-standby enabled-standby ceph-mon enabled-standby 
disabled cephmon-fs enabled-standby disabled drbd-cephmon enabled-standby enabled-standby ceph-osd enabled-active enabled-active helmrepository-fs enabled-standby disabled registry-token-server enabled-standby disabled dbmon enabled-standby enabled-standby ------------------------------------------------------------------------------ In order to verify it, we captured "neutron-server" logs of both the StarlingX servers. Case 1: Horizon is accessed using Floating Ip and One VM is spawned It goes on Active Controller and "neutron-server" logs of Active Controller shows the creation of network port and assigning port to the VM. Case 2: Horizon is accessed using Floating Ip and Two VM's are spawned simultaneously In this case, one VM gets spawned on Active Controller and Other on Standby Controller. But "neutron-server" logs of both the controller shows the creation of network port and assigning port to the VM. Inference: As per our understanding, if all the "Openstack Services" were configured in Active/Standby, then request would have come to only Active Controller. But, in this case both the Controllers are serving the request which implies that all the Openstack Services are configured in "Active/Active" State Is our understanding correct that the StarlingX Nodes are in Active-Standby configuration, but all the Openstack Services are configured in Active-Active State? Regards Anirudh Gupta From: Lin, Shuicheng Sent: 03 December 2019 05:57 To: Anirudh Gupta ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Anirudh, What is the purpose or benefit you want to have 2 active controller? I think it is reasonable there is 1 active controller only, and another controller is standby for backup, this is the meaning of HA. Two active controller at the same time will cause brain split. What do you mean "Openstack Services"? Openstack is containerized now, and run in both nodes by K8S. Best Regards Shuicheng From: Anirudh Gupta > Sent: Monday, December 2, 2019 6:59 PM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Shuicheng, All the services on the Active Controller are "enabled-active" and I can see some services "enabled-standby" on the standby controller. Is there any way we can set all the Openstack Services "enabled-active" on both the Active and Standby Node? Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 01 December 2019 12:42 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Anirudh, Per my understanding, for duplex, there is always 1 active node and 1 standby node. The "active/active" or "active/standby" in the document is for "services", not for node. If you try to run "sudo sm-dump" in the standby node, you will find some services are "active", while some services are "standby". For the 2nd question, VMs are running in compute node. And for duplex, both controller nodes are compute nodes also. And the "Active/Standby" is for controller, not for compute function, that is why VMs will run in both node. 
Best Regards Shuicheng From: Anirudh Gupta > Sent: Thursday, November 28, 2019 11:45 AM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, Can someone please give me any update on my Query: I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either "active/active" or "active/standby" https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html * Can someone please suggest the steps to configure Active-Active state? * I have 2 kubernetes pods corresponding to each Openstack Service in Duplex Setup and when I spawn any VM, it goes on any of the two controllers. So, is this the standard implementation in StarlingX? What needs to be done in Active-Standby configuration? * And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta From: Anirudh Gupta Sent: 26 November 2019 10:30 To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either "active/active" or "active/standby" https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html Can someone please suggest the steps to configure Active-Active state? And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bin.Qian at windriver.com Tue Dec 3 18:38:11 2019 From: Bin.Qian at windriver.com (Qian, Bin) Date: Tue, 3 Dec 2019 18:38:11 +0000 Subject: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608ED666@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F5759@SHSMSX105.ccr.corp.intel.com>, Message-ID: Anirudh, Each service has its own redundancy mode, that's why some services run Active/Active, some others run Active/Standby. It is not the matter if you can configure a service to be in a particular redundancy mode, it is whether the service itself can run in such mode. Neutron and other openstack services had been removed from bare metal. Such services are no long managed by service manager. Regards, Bin ________________________________ From: Anirudh Gupta [Anirudh.Gupta at hsc.com] Sent: Tuesday, December 03, 2019 3:19 AM To: Lin, Shuicheng; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Shuicheng We specifically wanted to understand whether “Openstack Services” like Nova, Neutron etc are working in Active/Active or Active/StandBy. In the “sm-dump”, there are “vim-services” mentioned, but there is no information about the state of individual “Neutron“ services and all other. 
controller-0:/home/sysadmin# sm-dump -Service_Groups------------------------------------------------------------------------ oam-services standby standby controller-services standby standby cloud-services standby standby patching-services standby standby directory-services active active web-services active active storage-services active active storage-monitoring-services standby standby vim-services standby standby --------------------------------------------------------------------------------------- -Services------------------------------------------------------------------------------ oam-ip enabled-standby disabled management-ip enabled-standby disabled drbd-pg enabled-standby enabled-standby drbd-rabbit enabled-standby enabled-standby drbd-cgcs enabled-standby enabled-standby drbd-platform enabled-standby enabled-standby pg-fs enabled-standby disabled rabbit-fs enabled-standby disabled nfs-mgmt enabled-standby disabled cgcs-fs enabled-standby disabled platform-fs enabled-standby disabled postgres enabled-standby disabled rabbit enabled-standby disabled cgcs-export-fs enabled-standby disabled platform-export-fs enabled-standby disabled cgcs-nfs-ip enabled-standby disabled platform-nfs-ip enabled-standby disabled sysinv-inv enabled-standby disabled sysinv-conductor enabled-standby disabled mtc-agent enabled-standby disabled hw-mon enabled-standby disabled dnsmasq enabled-standby disabled fm-mgr enabled-standby disabled keystone enabled-standby disabled open-ldap enabled-active enabled-active snmp enabled-standby disabled lighttpd enabled-active enabled-active horizon enabled-active enabled-active patch-alarm-manager enabled-standby disabled mgr-restful-plugin enabled-active enabled-active ceph-manager enabled-standby disabled vim enabled-standby disabled vim-api enabled-standby disabled vim-webserver enabled-standby disabled guest-agent enabled-standby disabled haproxy enabled-standby disabled pxeboot-ip enabled-standby disabled drbd-extension enabled-standby enabled-standby extension-fs enabled-standby disabled extension-export-fs enabled-standby disabled etcd enabled-standby disabled drbd-etcd enabled-standby enabled-standby etcd-fs enabled-standby disabled barbican-api enabled-standby disabled barbican-keystone-listener enabled-standby disabled barbican-worker enabled-standby disabled cluster-host-ip enabled-standby disabled docker-distribution enabled-standby disabled dockerdistribution-fs enabled-standby disabled drbd-dockerdistribution enabled-standby enabled-standby ceph-mon enabled-standby disabled cephmon-fs enabled-standby disabled drbd-cephmon enabled-standby enabled-standby ceph-osd enabled-active enabled-active helmrepository-fs enabled-standby disabled registry-token-server enabled-standby disabled dbmon enabled-standby enabled-standby ------------------------------------------------------------------------------ In order to verify it, we captured “neutron-server” logs of both the StarlingX servers. Case 1: Horizon is accessed using Floating Ip and One VM is spawned It goes on Active Controller and “neutron-server” logs of Active Controller shows the creation of network port and assigning port to the VM. Case 2: Horizon is accessed using Floating Ip and Two VM’s are spawned simultaneously In this case, one VM gets spawned on Active Controller and Other on Standby Controller. But “neutron-server” logs of both the controller shows the creation of network port and assigning port to the VM. 
Inference: As per our understanding, if all the “Openstack Services” were configured in Active/Standby, then request would have come to only Active Controller. But, in this case both the Controllers are serving the request which implies that all the Openstack Services are configured in “Active/Active” State Is our understanding correct that the StarlingX Nodes are in Active-Standby configuration, but all the Openstack Services are configured in Active-Active State? Regards Anirudh Gupta From: Lin, Shuicheng Sent: 03 December 2019 05:57 To: Anirudh Gupta ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Anirudh, What is the purpose or benefit you want to have 2 active controller? I think it is reasonable there is 1 active controller only, and another controller is standby for backup, this is the meaning of HA. Two active controller at the same time will cause brain split. What do you mean “Openstack Services”? Openstack is containerized now, and run in both nodes by K8S. Best Regards Shuicheng From: Anirudh Gupta > Sent: Monday, December 2, 2019 6:59 PM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Shuicheng, All the services on the Active Controller are “enabled-active” and I can see some services “enabled-standby” on the standby controller. Is there any way we can set all the Openstack Services “enabled-active” on both the Active and Standby Node? Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 01 December 2019 12:42 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Anirudh, Per my understanding, for duplex, there is always 1 active node and 1 standby node. The “active/active” or “active/standby” in the document is for “services”, not for node. If you try to run “sudo sm-dump” in the standby node, you will find some services are “active”, while some services are “standby”. For the 2nd question, VMs are running in compute node. And for duplex, both controller nodes are compute nodes also. And the “Active/Standby” is for controller, not for compute function, that is why VMs will run in both node. Best Regards Shuicheng From: Anirudh Gupta > Sent: Thursday, November 28, 2019 11:45 AM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, Can someone please give me any update on my Query: I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either “active/active” or “active/standby” https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html * Can someone please suggest the steps to configure Active-Active state? * I have 2 kubernetes pods corresponding to each Openstack Service in Duplex Setup and when I spawn any VM, it goes on any of the two controllers. So, is this the standard implementation in StarlingX? What needs to be done in Active-Standby configuration? * And, what difference it would have on my deployment in terms of functionality? 
Regards Anirudh Gupta From: Anirudh Gupta Sent: 26 November 2019 10:30 To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either “active/active” or “active/standby” https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html Can someone please suggest the steps to configure Active-Active state? And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Tue Dec 3 20:00:49 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Tue, 3 Dec 2019 20:00:49 +0000 Subject: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC27DECC5@ALA-MBD.corp.ad.wrs.com> Folks, The openstack services (nova, neutron etc.) are containerized with the life cycle managed by kubernetes. The openstack services run active-active. 
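Brent's point just below can be checked directly on the cluster: the containerized openstack services run as pods in the "openstack" namespace, scheduled across both controllers. A short sketch with the Python kubernetes client, roughly equivalent to "kubectl -n openstack get pods -o wide":

    # Sketch: list stx-openstack pods per node to see the active-active layout.
    from collections import Counter
    from kubernetes import client, config

    config.load_kube_config()                    # run where kubectl already works
    v1 = client.CoreV1Api()

    per_node = Counter()
    for pod in v1.list_namespaced_pod('openstack').items:
        per_node[pod.spec.node_name] += 1
        if pod.metadata.name.startswith(('nova-', 'neutron-')):
            print(pod.spec.node_name, pod.metadata.name, pod.status.phase)

    print(dict(per_node))                        # pod counts per controller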
Brent From: Qian, Bin [mailto:Bin.Qian at windriver.com] Sent: Tuesday, December 3, 2019 1:38 PM To: Anirudh Gupta ; Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Anirudh, Each service has its own redundancy mode, that's why some services run Active/Active, some others run Active/Standby. It is not the matter if you can configure a service to be in a particular redundancy mode, it is whether the service itself can run in such mode. Neutron and other openstack services had been removed from bare metal. Such services are no long managed by service manager. Regards, Bin ________________________________ From: Anirudh Gupta [Anirudh.Gupta at hsc.com] Sent: Tuesday, December 03, 2019 3:19 AM To: Lin, Shuicheng; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Shuicheng We specifically wanted to understand whether "Openstack Services" like Nova, Neutron etc are working in Active/Active or Active/StandBy. In the "sm-dump", there are "vim-services" mentioned, but there is no information about the state of individual "Neutron" services and all other. controller-0:/home/sysadmin# sm-dump -Service_Groups------------------------------------------------------------------------ oam-services standby standby controller-services standby standby cloud-services standby standby patching-services standby standby directory-services active active web-services active active storage-services active active storage-monitoring-services standby standby vim-services standby standby --------------------------------------------------------------------------------------- -Services------------------------------------------------------------------------------ oam-ip enabled-standby disabled management-ip enabled-standby disabled drbd-pg enabled-standby enabled-standby drbd-rabbit enabled-standby enabled-standby drbd-cgcs enabled-standby enabled-standby drbd-platform enabled-standby enabled-standby pg-fs enabled-standby disabled rabbit-fs enabled-standby disabled nfs-mgmt enabled-standby disabled cgcs-fs enabled-standby disabled platform-fs enabled-standby disabled postgres enabled-standby disabled rabbit enabled-standby disabled cgcs-export-fs enabled-standby disabled platform-export-fs enabled-standby disabled cgcs-nfs-ip enabled-standby disabled platform-nfs-ip enabled-standby disabled sysinv-inv enabled-standby disabled sysinv-conductor enabled-standby disabled mtc-agent enabled-standby disabled hw-mon enabled-standby disabled dnsmasq enabled-standby disabled fm-mgr enabled-standby disabled keystone enabled-standby disabled open-ldap enabled-active enabled-active snmp enabled-standby disabled lighttpd enabled-active enabled-active horizon enabled-active enabled-active patch-alarm-manager enabled-standby disabled mgr-restful-plugin enabled-active enabled-active ceph-manager enabled-standby disabled vim enabled-standby disabled vim-api enabled-standby disabled vim-webserver enabled-standby disabled guest-agent enabled-standby disabled haproxy enabled-standby disabled pxeboot-ip enabled-standby disabled drbd-extension enabled-standby enabled-standby extension-fs enabled-standby disabled extension-export-fs enabled-standby disabled etcd enabled-standby disabled drbd-etcd enabled-standby enabled-standby etcd-fs enabled-standby disabled barbican-api enabled-standby disabled barbican-keystone-listener enabled-standby disabled barbican-worker enabled-standby 
disabled cluster-host-ip enabled-standby disabled docker-distribution enabled-standby disabled dockerdistribution-fs enabled-standby disabled drbd-dockerdistribution enabled-standby enabled-standby ceph-mon enabled-standby disabled cephmon-fs enabled-standby disabled drbd-cephmon enabled-standby enabled-standby ceph-osd enabled-active enabled-active helmrepository-fs enabled-standby disabled registry-token-server enabled-standby disabled dbmon enabled-standby enabled-standby ------------------------------------------------------------------------------ In order to verify it, we captured "neutron-server" logs of both the StarlingX servers. Case 1: Horizon is accessed using Floating Ip and One VM is spawned It goes on Active Controller and "neutron-server" logs of Active Controller shows the creation of network port and assigning port to the VM. Case 2: Horizon is accessed using Floating Ip and Two VM's are spawned simultaneously In this case, one VM gets spawned on Active Controller and Other on Standby Controller. But "neutron-server" logs of both the controller shows the creation of network port and assigning port to the VM. Inference: As per our understanding, if all the "Openstack Services" were configured in Active/Standby, then request would have come to only Active Controller. But, in this case both the Controllers are serving the request which implies that all the Openstack Services are configured in "Active/Active" State Is our understanding correct that the StarlingX Nodes are in Active-Standby configuration, but all the Openstack Services are configured in Active-Active State? Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 03 December 2019 05:57 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Anirudh, What is the purpose or benefit you want to have 2 active controller? I think it is reasonable there is 1 active controller only, and another controller is standby for backup, this is the meaning of HA. Two active controller at the same time will cause brain split. What do you mean "Openstack Services"? Openstack is containerized now, and run in both nodes by K8S. Best Regards Shuicheng From: Anirudh Gupta > Sent: Monday, December 2, 2019 6:59 PM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Shuicheng, All the services on the Active Controller are "enabled-active" and I can see some services "enabled-standby" on the standby controller. Is there any way we can set all the Openstack Services "enabled-active" on both the Active and Standby Node? Regards Anirudh Gupta From: Lin, Shuicheng > Sent: 01 December 2019 12:42 To: Anirudh Gupta >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Anirudh, Per my understanding, for duplex, there is always 1 active node and 1 standby node. The "active/active" or "active/standby" in the document is for "services", not for node. If you try to run "sudo sm-dump" in the standby node, you will find some services are "active", while some services are "standby". For the 2nd question, VMs are running in compute node. And for duplex, both controller nodes are compute nodes also. And the "Active/Standby" is for controller, not for compute function, that is why VMs will run in both node. 
Best Regards Shuicheng From: Anirudh Gupta > Sent: Thursday, November 28, 2019 11:45 AM To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, Can someone please give me any update on my Query: I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either "active/active" or "active/standby" https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html * Can someone please suggest the steps to configure Active-Active state? * I have 2 kubernetes pods corresponding to each Openstack Service in Duplex Setup and when I spawn any VM, it goes on any of the two controllers. So, is this the standard implementation in StarlingX? What needs to be done in Active-Standby configuration? * And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta From: Anirudh Gupta Sent: 26 November 2019 10:30 To: starlingx-discuss at lists.starlingx.io; starlingx-announce at lists.starlingx.io Subject: StarlingX 2.0 Duplex Controller in Active/Active HA Hi Team, I need to install StarlingX 2.0 Duplex Bare Metal. As per the document, there can be 2 modes in which HA controllers can be configure i.e. either "active/active" or "active/standby" https://docs.starlingx.io/deploy_install_guides/r2_release/bare_metal/aio_duplex.html Can someone please suggest the steps to configure Active-Active state? And, what difference it would have on my deployment in terms of functionality? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. 
If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Dec 3 20:15:25 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 03 Dec 2019 12:15:25 -0800 Subject: [Starlingx-discuss] Thread on OpenDev Independence and Governance Message-ID: <302bb4ca-2cba-41a7-baa9-3c86cf1962c9@www.fastmail.com> Hello, I wanted everyone to be aware of this thread [0] at openstack-infra at lists.openstack.org that I've started to formalize OpenDev's future independence and governance. Currently the infra team is formally an OpenStack project, but we think that we'll be better able to serve our non OpenStack users if the OpenDev aspects of this team become independent with its own governance. We are more than happy to hear feedback so please read this thread and respond if the topic interests you. Note, I'm using this pointer thread in an effort to avoid cross posting and losing feedback. It would be great if people can respond to the original on openstack-infra at lists. [0] http://lists.openstack.org/pipermail/openstack-infra/2019-December/006537.html Thank you, Clark From bruce.e.jones at intel.com Tue Dec 3 22:09:29 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 3 Dec 2019 22:09:29 +0000 Subject: [Starlingx-discuss] Reminder: Community call tomorrow 7AM Pacific Message-ID: <9A85D2917C58154C960D95352B22818BED37D579@fmsmsx123.amr.corp.intel.com> Please join our weekly community call tomorrow. The agenda is on this etherpad [1]. Please feel free to add items to the agenda. Thank you! Brucej [1] https://etherpad.openstack.org/p/stx-status -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Tue Dec 3 23:43:47 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 3 Dec 2019 23:43:47 +0000 Subject: [Starlingx-discuss] [ Final Regression Regression testing - stx3.0 ] Report for 12/03/2019 Message-ID: Today's final regression report is pending because we didn't have release candidate iso, We will have results for the next Thursday. Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Wed Dec 4 00:04:57 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Wed, 4 Dec 2019 00:04:57 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191203 Message-ID: <6E48C107-D8C5-4071-82EA-D412D640BD83@intel.com> Status: GREEN (Baremetal Only) Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-03 (link) =========================================== Sanity Test is executed in Bare Metal Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From bin.yang at intel.com Wed Dec 4 02:00:54 2019 From: bin.yang at intel.com (Yang, Bin) Date: Wed, 4 Dec 2019 02:00:54 +0000 Subject: [Starlingx-discuss] Use Case sharing: use StarlingX as DevOps Infra Message-ID: Hi community friends, As you know, StarlingX has good scalability, which can start from 3 nodes and scale to 100+ nodes. This is perfect for devops use case. A new project might only have a few servers at the beginning and need to scale it in the future. So far, I am using StarlingX as DevOps Infra for our project CI/CD. I'd like to share some BKMs and an example StarlingX App for such use case. Here is the WiKi link. Any comment is welcome. https://wiki.openstack.org/wiki/Use_StarlingX_as_DevOps_Infra thanks, Bin -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Wed Dec 4 02:10:22 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Wed, 4 Dec 2019 02:10:22 +0000 Subject: [Starlingx-discuss] ceph ops enabling in sysinv-conductor In-Reply-To: <82EBE26D-C9F9-4425-B5AE-9FEF1E74BC65@windriver.com> References: <56829C2A36C2E542B0CCB9854828E4D85628A993@CDSMSX102.ccr.corp.intel.com> <56829C2A36C2E542B0CCB9854828E4D85628AC31@CDSMSX102.ccr.corp.intel.com> <82EBE26D-C9F9-4425-B5AE-9FEF1E74BC65@windriver.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628AEF3@CDSMSX102.ccr.corp.intel.com> Thanks Bob I will fix this issue with these three items. * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. BR! 
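For the second item, the calculation I have in mind is roughly the standard Ceph sizing guideline: about 100 PGs per OSD, divided by the replication factor, rounded down to a power of two. A minimal sketch of what could go into sysinv/helm/rbd_provisioner.py; the function name and default values are illustrative only, not the final code:

def estimate_pg_num(num_osds, replication=2, target_pgs_per_osd=100,
                    min_pg_num=32, max_pg_num=1024):
    # (OSDs * target) / replication, rounded down to a power of two
    # and clamped to a sane range for a single pool.
    if num_osds <= 0:
        return 64  # keep today's static default when no OSDs are provisioned yet
    raw = max(1, (num_osds * target_pgs_per_osd) // replication)
    pg_num = 1 << (raw.bit_length() - 1)
    return max(min_pg_num, min(pg_num, max_pg_num))

With the 6 OSDs from the LP this would give 256 PGs instead of the fixed 64, i.e. roughly 85 PGs per OSD at 2x replication, which would clear the "too few PGs per OSD" warning.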
Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z Cc: 'starlingx-discuss at lists.starlingx.io' ; Poncea, Ovidiu Subject: Re: ceph ops enabling in sysinv-conductor See inline… From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. * I think it’s potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don’t think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. 
This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Philip_Wang at alphanetworks.com Wed Dec 4 02:16:01 2019 From: Philip_Wang at alphanetworks.com (Philip_Wang at alphanetworks.com) Date: Wed, 4 Dec 2019 02:16:01 +0000 Subject: [Starlingx-discuss] PTP tx timestamp seem not get from NIC Message-ID: Dear Team, In my All-In-One environment, I check /var/log/user.log find below message. 2019-12-03T16:03:30.000 controller-0 ptp4l: warning [20954.517] clockcheck: clock jumped forward or running faster than expected! 2019-12-03T16:03:30.000 controller-0 ptp4l: notice [20954.517] port 1: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.613] port 0015b2.fffe.a92e24-1 changed state 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.613] reconfiguring after port state change 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.613] master clock not ready, waiting... 
2019-12-03T16:03:30.000 controller-0 ptp4l: notice [20954.861] port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.914] port 0015b2.fffe.a92e24-1 changed state 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.914] reconfiguring after port state change 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.914] selecting CLOCK_REALTIME for synchronization 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.914] selecting enp0s25 as the master clock 2019-12-03T16:03:35.000 controller-0 phc2sys: info [20959.816] rms 8125465923430 max 70368744177703 freq -3070283 +/- 72810449 delay 1662 +/- 172 2019-12-03T16:03:43.000 controller-0 ptp4l: info [20967.182] rms 17864936252211 max 70368748587963 freq +17842027 +/- 101360440 delay 173334 +/- 1003918 2019-12-03T16:03:43.000 controller-0 ptp4l: warning [20967.688] clockcheck: clock jumped forward or running faster than expected! 2019-12-03T16:03:43.000 controller-0 ptp4l: notice [20967.688] port 1: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20967.720] port 0015b2.fffe.a92e24-1 changed state 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20967.720] reconfiguring after port state change 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20967.720] master clock not ready, waiting... 2019-12-03T16:03:43.000 controller-0 ptp4l: notice [20968.031] port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20968.120] port 0015b2.fffe.a92e24-1 changed state 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20968.120] reconfiguring after port state change 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20968.120] selecting CLOCK_REALTIME for synchronization 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20968.120] selecting enp0s25 as the master clock This message is seem that delay request packet TX timestamp is not get from NIC(Intel I210) cause port 1 state from SLAVE to UNCALIBRATED. Has any debug method can check the TX timestamp?? Best Reagrds, - Philip Wang --- Alpha Networks Inc. TEL: 886-3-5636666 EXT:6403 This electronic mail transmission is intended only for the named recipient. It contains information which may be privileged,confidential and exempt from disclosure under applicable law. Dissemination, distribution, or copying of this communication by anyone other than the recipient or the recipient's agent is strictly prohibited. If this electronic mail transmission is received in error, Please notify us immediately and delete the message and all attachments of it from your computer system. Thank you for your cooperation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Wed Dec 4 03:33:45 2019 From: yong.hu at intel.com (Yong Hu) Date: Wed, 4 Dec 2019 11:33:45 +0800 Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? Message-ID: <609ddf91-2cf0-b65a-79e2-982b889d15eb@intel.com> Hi folks, I am working on a LP and might like to use "barbican-keystone-listener" to monitor a keystone event. From "sm-dump", we are seeing "barbican-api", "barbican-keystone-listener" and "barbican-worker" are "enabled-active", but I have a few questions about how Barbican (on host) is being used in StarlingX: 1. what secrets are stored in Barbican currently? btw: I got nothing by running "openstack secret list" or "barbican secret list". 
As well, "admin" and its password are stored in Keyring, not in Barbican. 2. Does "barbican-keystone-listener" actually work with keystone in StarlingX? I enabled "debug" flag in /etc/barbican/barbican.conf, but when I was triggering events with keystone, it seemed there was no notifications captured by "barbican-keystone-listener". 3. there is a "barbican" user/identify (with initial password) managed by keystone, but I wonder what we used this "barbican" for StarlingX. thanks in advance! -Yong From cristopher.j.lemus.contreras at intel.com Wed Dec 4 09:10:44 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Wed, 04 Dec 2019 03:10:44 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <3cea8d$77gi28@fmsmga002.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191204T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From parkeryan at tencent.com Wed Dec 4 09:16:05 2019 From: parkeryan at tencent.com (=?utf-8?B?cGFya2VyeWFuKOmXq+W/l+adsCk=?=) Date: Wed, 4 Dec 2019 09:16:05 +0000 Subject: [Starlingx-discuss] How to modify management subnet_pool? Message-ID: <386BC956-9B56-44B5-9906-33AD1322F463@tencent.com> Hi, I have a problem while deploying StarlingX 2.0, and I found the management subnet-pool was not enough for context. Here is the log. 
[sysadmin at controller-1 ~(keystone_admin)]$ system host-update 12 personality=worker hostname=compute-7 Remote error: AddressPoolExhausted Address pool management has no available addresses [u'Traceback (most recent call last):\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 438, in _process_data\n **args)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1643, in configure_ihost\n self._configure_worker_host(context, host)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1505, in _configure_worker_host\n self._allocate_addresses_for_host(context, host)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1100, in _allocate_addresses_for_host\n address_name).address\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1055, in _allocate_pool_address\n interface_id, pool_uuid, address_name, dbapi=self.dbapi\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/api/controllers/v1/address_pool.py", line 416, in assign_address\n ip_address = cls.allocate_address(pool, dbapi)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/api/controllers/v1/address_pool.py", line 399, in allocate_address\n raise exception.AddressPoolExhausted(name=pool.name)\n', u'AddressPoolExhausted: Address pool management has no available addresses\n']. And I have to extend the management address pool. [sysadmin at controller-0 sow(keystone_admin)]$ system addrpool-show d6ed7b27-4037-42db-97c7-676256b1c883 +---------------------+--------------------------------------+ | Property | Value | +---------------------+--------------------------------------+ | uuid | d6ed7b27-4037-42db-97c7-676256b1c883 | | name | management | | network | 192.168.204.0 | | prefix | 28 | | order | random | | ranges | ['192.168.204.2-192.168.204.14'] | | floating_address | 192.168.204.2 | | controller0_address | 192.168.204.3 | | controller1_address | 192.168.204.4 | | gateway_address | None | +---------------------+--------------------------------------+ I am trying to extend the ranges by ‘system addrpool-modify’, but it reminds the prefix can only be modified during bootstrap phase, [sysadmin at controller-0 sow(keystone_admin)]$ system help addrpool-modify usage: system addrpool-modify [--name ] [--ranges ] [--order ] [--prefix ] Modify interface attributes. Positional arguments: UUID of IP address pool entry Optional arguments: --name Name of the Address Pool] --ranges The inclusive range of addresses to allocate [,,...] --order The allocation order within the start/end range --prefix CIDR prefix, only modifiable during bootstrap phase. Anybody can tell me how to make the controller node and the compute node into bootstrap phase or I should redeploy the whole context from start, thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marcelx.schaible at intel.com Wed Dec 4 15:19:42 2019 From: marcelx.schaible at intel.com (Schaible, MarcelX) Date: Wed, 4 Dec 2019 15:19:42 +0000 Subject: [Starlingx-discuss] StarlingX 2.0: ansible-playbook fails in task "Initializing Kubernetes master" Message-ID: Hi, we started a new installation of StarlingX 2.0, DUPLEX and having some problems with the TASK [bringup-essential-services : Initializing Kubernetes master] (see below) Network configuration seems to work fine: wget https://k8s.gcr.io --2019-12-04 15:16:40-- https://k8s.gcr.io/ Resolving proxy-mu.intel.com (proxy-mu.intel.com)... 10.217.247.236 Connecting to proxy-mu.intel.com (proxy-mu.intel.com)|10.217.247.236|:911... connected. Proxy request sent, awaiting response... 302 Found Location: https://cloud.google.com/container-registry/ [following] --2019-12-04 15:16:40-- https://cloud.google.com/container-registry/ Connecting to proxy-mu.intel.com (proxy-mu.intel.com)|10.217.247.236|:911... connected. Proxy request sent, awaiting response... 200 OK Length: 311325 (304K) [text/html] Saving to: 'index.html.2' 100%[==================================================================================================================================================>] 311,325 1.15MB/s in 0.3s 2019-12-04 15:16:41 (1.15 MB/s) - 'index.html.2' saved [311325/311325] Any idea? Thanks Marcel TASK [bringup-essential-services : Update Kube admin yaml with OpenID Connect info] ******************************************************************************************************** TASK [bringup-essential-services : Delete Kube admin yaml OpenID Connect entries if required config parameters are not present] ************************************************************ changed: [localhost] => (item=sed -i -e '/<%= @apiserver_oidc_client_id %>/d' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e '/<%= @apiserver_oidc_issuer_url %>/d' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e '/<%= @apiserver_oidc_username_claim %>/d' /etc/kubernetes/kubeadm.yaml) TASK [bringup-essential-services : Initializing Kubernetes master] ************************************************************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubeadm", "init", "--config=/etc/kubernetes/kubeadm.yaml"], "delta": "0:01:32.357744", "end": "2019-12-04 15:11:42.604982", "msg": "non-zero return code", "rc": 1, "start": "2019-12-04 15:10:10.247238", "stderr": "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.6. 
Latest validated version: 18.06\nerror execution phase preflight: [preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", "stderr_lines": ["\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.6. 
Latest validated version: 18.06", "error execution phase preflight: [preflight] Some fatal errors occurred:", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`"], "stdout": "[init] Using Kubernetes version: v1.13.5\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "stdout_lines": ["[init] Using Kubernetes version: v1.13.5", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"]} Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bin.Qian at windriver.com Wed Dec 4 15:26:01 2019 From: Bin.Qian at windriver.com (Qian, Bin) Date: Wed, 4 Dec 2019 15:26:01 +0000 Subject: [Starlingx-discuss] [docs] provisioning board management control Message-ID: With the completion of launch pad 1852328 [0], two change lists are under reviewed ([1][2]). After both change lists are merged, the way of provisioning board management control will be changed and the board management control settings will be configured as per-host basis. 
The new board management types are: none -- board management control is not provisioned redfish ipmi dynamic -- will try redfish then ipmi There are 2 ways to provision board management control 1. when adding a node, set bm_type, bm_ip, bm_username and bm_password, with the system command below: system host-add -n -p -m -I -U -P -T board management type is optional with default is 'dynamic' when board management IP and username are both provided. 2. after a node is added, set bm_type, bm_ip, bm_username and bm_password, with the system command below: system host-update bm_ip= bm_username= bm_password= bm_type= The system command below can be used to deprovision board management control to a host: system host-update bm_type=none The corresponding horizon GUI is also changed as attached screenshot (bmc-horizon.JPG). Additionally, bmc_access_method is no longer a valid service parameter. The system service-parameter commands to access such service parameter are invalid: system service-parameter-modify platform maintenance bmc_access_method=<...> system service-parameter-add platform maintenance bmc_access_method=<...> [0] https://bugs.launchpad.net/starlingx/+bug/1852328 [1] https://review.opendev.org/#/c/697177 [2] https://review.opendev.org/#/c/697175 Regards, Bin -------------- next part -------------- A non-text attachment was scrubbed... Name: bmc-horizon.JPG Type: image/jpeg Size: 57570 bytes Desc: bmc-horizon.JPG URL: From austin.sun at intel.com Wed Dec 4 15:28:16 2019 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 4 Dec 2019 15:28:16 +0000 Subject: [Starlingx-discuss] MoM : Weekly StarlingX non-OpenStack distro meeting, 12/04/2019 Message-ID: Hi All: Thanks join the meeting , The MoM of 12/04/2019: - Ceph containerization update (Tingjie/martin) Tingjie Presents a flow chart how to integrate python-rookclient , TingJie will send to maillist. - Standardize Flock Package Versioning (Yang Bin) JITStack warm-up workshop. JITStack Team is ready to start this task. - Kata Container (Shuicheng) B&R is successfully w/ containerd . Continue to check docker registry post method for token fetch. --- enhancement , not gating. - CentOS 8.0 upgrade planning (Shuai Zhao) SRPM, 3 SRPM have issue when building. 19 are related w/ openstack. tarball : 26 can be compiled. as much as using current version for user space tarball. for kernel , using latest version. Container Build : building iso meet issue. might to change build-tools scripts ? Commit message and patch quality is improved , especially for kernel patches zhiguo made. * stx 3.0 bugs fix - CVE issue tracking (Shuicheng) * OVMF . patch is ready for merge * kernel change upgrade 1062 rt test has some regression due to spectre patch included in 1062. is spectre patch impacting rt performance , Robin/Shuicheng will find it is reasonable. - Storage issue tracking (Tingjie) 4-Medium https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.storage LP#1826886 cinder cmd not working intermittently ----Ma zheng to confirm if this is different issue and is valid LP#1844164 alarm 800.001 raised on lock storage-0 and not cleared when storage-0 unlocks --- Martin LP#1847336 IPv6 Distributed Cloud: ansible-playbook 'Wipe ceph osds' does not support re-play / re-entrance ---- Ovidiu LP#1848198 Glance backend present on non-openstack deployment ---- Stefan Dinescu - Others issue tracking (Austin) https://bugs.launchpad.net/starlingx/+bug/1847335 , need to find BIOS info the issue reported. 
* Opens (All) Thanks. BR Austin Sun. From bruce.e.jones at intel.com Wed Dec 4 15:34:31 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 4 Dec 2019 15:34:31 +0000 Subject: [Starlingx-discuss] Community call minutes Dec 4 2019 Message-ID: <9A85D2917C58154C960D95352B22818BED37DC39@fmsmsx123.amr.corp.intel.com> * Standing Topics ? Gerrit Reviews in Need of Attention o Kernell update reviews for CVE fixes - the kernel update causes a latency test regression. Debug in progress. Mediation for side channel issues suspected. Robin has found a suspected change in the kernel but more work is needed to verify. ? Patches: CVE patches: https://review.opendev.org/#/q/owner:bin1.lu%2540intel.com+status:open o CentOS-8 reviews in progress - help requested: https://review.opendev.org/#/q/topic:centos8+(status:open)+AND+projects:starlingx/ ? Sanity: any RED since last week? o Passing on the r/stx.3.0 branch, master looks green after the fixes from yesterday. ? Unanswered Requests for Help on Mailing List o Many threads in progress, please join in the helping process if you can o Kris: We have two standing requests for help with docs - certificate setup and system config. ? Certificate Configuration & Management -- https://storyboard.openstack.org/#!/story/2006866 ? System Config Guide -- https://storyboard.openstack.org/#!/story/2006862 ? DNS Servers, task 37502 ? OAM Firewall, task 37504 o PSA - if anyone has content for the docs needed for R3, please reach out to Kris or join the Docs call ? https://etherpad.openstack.org/p/stx-r3-target-content * This Week's Topics ? Release 2.0.2 bug status: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.2.0 ? Release 3.0 bug status :https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.3.0 ? Release 3.0 testing status: * We have a green Sanity on the latest Cengn ISO. Starting regression testing ? Testing improvements - Bruce ? Slack https://starlingx.slack.com/ - Bruce * We had previous objections from Ilidoko (who is not here today) * We should use Slack or IRC but not both. Currently there are a few of us on IRC, and there seems to be a reasonable level of activity. * Slack is not open source and invites are required * We will continue to use IRC (and wechat). ? The Intel Shanghai team is using StarlingX to host their internal CI/CD activity. 10+ servers are in the cluster created by Bin Yang. See his email on the list today. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jerry.Sun at windriver.com Wed Dec 4 17:07:23 2019 From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry)) Date: Wed, 4 Dec 2019 17:07:23 +0000 Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? In-Reply-To: <609ddf91-2cf0-b65a-79e2-982b889d15eb@intel.com> References: <609ddf91-2cf0-b65a-79e2-982b889d15eb@intel.com> Message-ID: Hi Yong, I am not sure what the entire list of secrets stored in barbican is, but I know if you specify credentials for external docker registry to bootstrap the system with, barbican secrets will be created for those. Thanks, Jerry -----Original Message----- From: Yong Hu Sent: Tuesday, December 3, 2019 10:34 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? Hi folks, I am working on a LP and might like to use "barbican-keystone-listener" to monitor a keystone event. 
From "sm-dump", we are seeing "barbican-api", "barbican-keystone-listener" and "barbican-worker" are "enabled-active", but I have a few questions about how Barbican (on host) is being used in StarlingX: 1. what secrets are stored in Barbican currently? btw: I got nothing by running "openstack secret list" or "barbican secret list". As well, "admin" and its password are stored in Keyring, not in Barbican. 2. Does "barbican-keystone-listener" actually work with keystone in StarlingX? I enabled "debug" flag in /etc/barbican/barbican.conf, but when I was triggering events with keystone, it seemed there was no notifications captured by "barbican-keystone-listener". 3. there is a "barbican" user/identify (with initial password) managed by keystone, but I wonder what we used this "barbican" for StarlingX. thanks in advance! -Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Alex.Kozyrev at windriver.com Wed Dec 4 21:30:49 2019 From: Alex.Kozyrev at windriver.com (Kozyrev, Alexander (Alex)) Date: Wed, 4 Dec 2019 21:30:49 +0000 Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? In-Reply-To: References: <609ddf91-2cf0-b65a-79e2-982b889d15eb@intel.com> Message-ID: Hi Yong, 1. We store BMC passwords in Barbican as well as Docker Registry credentials currently. 2. Keystone listener processes only Keystone project delete events to purge all associated Barbican resources. 3. Barbican identity in Keystone is used for authentication and multi-tenant authorization of Barbican. Regards, Alex -----Original Message----- From: Sun, Yicheng (Jerry) [mailto:Jerry.Sun at windriver.com] Sent: December 4, 2019 12:07 To: Yong Hu ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How is Barbican being used in StarlingX? Hi Yong, I am not sure what the entire list of secrets stored in barbican is, but I know if you specify credentials for external docker registry to bootstrap the system with, barbican secrets will be created for those. Thanks, Jerry -----Original Message----- From: Yong Hu Sent: Tuesday, December 3, 2019 10:34 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? Hi folks, I am working on a LP and might like to use "barbican-keystone-listener" to monitor a keystone event. From "sm-dump", we are seeing "barbican-api", "barbican-keystone-listener" and "barbican-worker" are "enabled-active", but I have a few questions about how Barbican (on host) is being used in StarlingX: 1. what secrets are stored in Barbican currently? btw: I got nothing by running "openstack secret list" or "barbican secret list". As well, "admin" and its password are stored in Keyring, not in Barbican. 2. Does "barbican-keystone-listener" actually work with keystone in StarlingX? I enabled "debug" flag in /etc/barbican/barbican.conf, but when I was triggering events with keystone, it seemed there was no notifications captured by "barbican-keystone-listener". 3. there is a "barbican" user/identify (with initial password) managed by keystone, but I wonder what we used this "barbican" for StarlingX. thanks in advance! 
-Yong _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Wed Dec 4 22:07:23 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 4 Dec 2019 22:07:23 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting In-Reply-To: <9700A18779F35F49AF027300A49E7C76608F54CC@SHSMSX105.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C76608E9D6B@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F54CC@SHSMSX105.ccr.corp.intel.com> Message-ID: Shuicheng: Thanks for the updates. We would like to take out your changes for KATA containers for a test. Can you rebase your commits and let me know if these are all of the commits: https://review.opendev.org/#/q/topic:kata+(status:open) Once you have rebased we'll create a designer build and run a few tests and let you know if we find anything that needs to be addressed. Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Sunday, December 01, 2019 9:15 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I try to run busybox with kata containers by k8s, and it could run successfully in IPv6 environment. Best Regards Shuicheng From: Miller, Frank > Sent: Saturday, November 30, 2019 4:03 AM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the update. It looks like stx-openstack has not yet been tested with IPv6. But we have been testing IPv6 with kubernetes platform only and simple k8s apps. Can you confirm kata containers is working with IPv6 when stx-openstack is not applied/not used? Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, November 29, 2019 12:48 AM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I created below LP for the IPv6 deployment issue I meet. Could you help check whether IPv6 deployment is verfied before and share me the BKM for it if there is? Thanks. https://bugs.launchpad.net/starlingx/+bug/1854316 Best Regards Shuicheng From: Miller, Frank > Sent: Tuesday, November 26, 2019 11:37 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting Abbreviated minutes: Next meeting: Tuesday Dec 10 Minutes: 1. Stx.3.0 gating LPs: * Plan for the current 18 gating LPs: * 4 LPs are expected to land for stx.3.0 including the 2 Highs * 2 LPs to be marked invalid/not reproducible * 11 LPs to be re-gated to stx.4.0 * 1 LP TBD (Erich Cordoba to update 1824881) 2. 
Stx.4.0 features: In features: * 2006145: Kata container support [Shuicheng Lin] --> resourced and In for stx.4.0 * 2006537: Decouple Container Applications from Platform [Bob Church] --> resourced and In for stx.4.0 * 2006770: Backup & Restore - openstack [Ovidiu Poncea] --> resourced and In for stx.4.0 * 2005312: Containerize Openstack clients --> In for now but requires plan * TBD: Upversion Kubernetes and container platform components --> haven't create SB yet but will be required during stx.4.0 NOT In features: * 2006787: Smaller memory node support [Austin Sun] --> not committed for stx.4.0 but being worked on for stx.4.0 (ie: prep) * 2004008: Fault Containerization --> not In because it requires splitting GUI plugin into 2: one with shared panels, the other with the platform panels which is not resourced Etherpad with full minutes: https://etherpad.openstack.org/p/stx-containerization Frank -----Original Appointment----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, November 25, 2019 3:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Containerization Meeting When: Tuesday, November 26, 2019 9:30 AM-10:00 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 Please join me for the bi-weekly containers meeting. Agenda for November 26 meeting: 1. stx.3.0 gating work items: 18 gating LPs (down from 26 at our last meeting) * Status update for high priority LPs (2): * https://bugs.launchpad.net/starlingx/+bug/1838659 kubernetes apiserver certificate needs rotation [Mingyuan Qi] * https://bugs.launchpad.net/starlingx/+bug/1851287 Controller failed to lock following a failover due to elastic pod failure to shutdown [Dan Voiculeasa] * Medium priority LPs (16): * Status for the 4 LPs < 50 days old: * https://bugs.launchpad.net/starlingx/+bug/1851294 [Angie Wang] * https://bugs.launchpad.net/starlingx/+bug/1850438 [Steve Webster] * https://bugs.launchpad.net/starlingx/+bug/1850189 [Stefan Dinescu] * https://bugs.launchpad.net/starlingx/+bug/1846829 [David Sullivan] * Status update for the 12 LPs that >100 days old. [Al, Angie, Bart, Erich, JimG, Ran, Shuicheng, Tao] * Can any be closed as not reproducible or won't fix? * Which ones are being actively worked on? Which ones do the owners have a plan to fix? 2. stx.4.0 planning: * 2006145: Kata container support [Shuicheng Lin] - Request update from Shuicheng if final 2 test scenarios are done (IPv6 testing + external registry with username/pwd authentication) * 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] - Request feature approach & spec update * 2006537: Decouple Container Applications from Platform (stx.4.0 feature) [Bob Church] - Feature status update * Other potential stx.4.0 features --> which are resourced/have plans to address in stx.4.0? 
* 2006770: Backup & Restore - openstack [Ovidiu Poncea] * 2005312: Containerize Openstack clients * 2004008: Fault Containerization * TBD: Upversion Kubernetes and container platform components Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers << File: ATT00002.txt >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Wed Dec 4 23:57:04 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 4 Dec 2019 23:57:04 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20191204 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-4 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] List of docker images : http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007175.html regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Thu Dec 5 00:53:17 2019 From: yong.hu at intel.com (Yong Hu) Date: Thu, 5 Dec 2019 08:53:17 +0800 Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? In-Reply-To: References: <609ddf91-2cf0-b65a-79e2-982b889d15eb@intel.com> Message-ID: Hi Alex and Jerry, Thanks much! Your answers addressed my questions mostly. Though, I still wondered whether currently barbican-keystone-listener actually works in StarlingX, because even I turned on the debug flag, I didn't see log output from "class NotificationTask -> process_event" in "barbican/queue/keystone_listener.py", where there is supposedly some debug log about the incoming notification before handling "operation_type == 'deleted'". 
Have we ever tested this feature before? regards, Yong On 2019/12/5 5:30 AM, Kozyrev, Alexander (Alex) wrote: > Hi Yong, > > 1. We store BMC passwords in Barbican as well as Docker Registry credentials currently. > 2. Keystone listener processes only Keystone project delete events to purge all associated Barbican resources. > 3. Barbican identity in Keystone is used for authentication and multi-tenant authorization of Barbican. > > Regards, > Alex > > -----Original Message----- > From: Sun, Yicheng (Jerry) [mailto:Jerry.Sun at windriver.com] > Sent: December 4, 2019 12:07 > To: Yong Hu ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] How is Barbican being used in StarlingX? > > Hi Yong, > > I am not sure what the entire list of secrets stored in barbican is, but I know if you specify credentials for external docker registry to bootstrap the system with, barbican secrets will be created for those. > > Thanks, > Jerry > > -----Original Message----- > From: Yong Hu > Sent: Tuesday, December 3, 2019 10:34 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? > > Hi folks, > I am working on a LP and might like to use "barbican-keystone-listener" > to monitor a keystone event. > > From "sm-dump", we are seeing "barbican-api", "barbican-keystone-listener" and "barbican-worker" are "enabled-active", but I have a few questions about how Barbican (on host) is being used in > StarlingX: > 1. what secrets are stored in Barbican currently? btw: I got nothing by running "openstack secret list" or "barbican secret list". As well, "admin" and its password are stored in Keyring, not in Barbican. > 2. Does "barbican-keystone-listener" actually work with keystone in StarlingX? I enabled "debug" flag in /etc/barbican/barbican.conf, but when I was triggering events with keystone, it seemed there was no notifications captured by "barbican-keystone-listener". > 3. there is a "barbican" user/identify (with initial password) managed by keystone, but I wonder what we used this "barbican" for StarlingX. > > > thanks in advance! > > -Yong > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From shuicheng.lin at intel.com Thu Dec 5 03:06:59 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 5 Dec 2019 03:06:59 +0000 Subject: [Starlingx-discuss] StarlingX 2.0: ansible-playbook fails in task "Initializing Kubernetes master" In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C76608F61E3@SHSMSX105.ccr.corp.intel.com> Hi Marcel: Per your wget log, a proxy setting is needed. And you don't have a local private registry for the docker images. Is that correct? If so, did you set the proxy in localhost.yml? It would look like below: docker_http_proxy: http://PROXY_IP:PORT docker_https_proxy: http://PROXY_IP:PORT You could use the docker pull command to verify whether the registry can be accessed or not.
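As a rough sketch (PROXY_IP:PORT and the no_proxy entries are placeholders, not values from your setup; docker_no_proxy is optional), the proxy section of localhost.yml could look like:

docker_http_proxy: http://PROXY_IP:PORT
docker_https_proxy: http://PROXY_IP:PORT
docker_no_proxy:
  - localhost
  - 127.0.0.1

A quick check from the controller, using one of the images kubeadm needs anyway:

sudo docker pull k8s.gcr.io/pause:3.1

If that pull succeeds, the registry is reachable through the proxy.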
Best Regards Shuicheng From: Schaible, MarcelX Sent: Wednesday, December 4, 2019 11:20 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX 2.0: ansible-playbook fails in task "Initializing Kubernetes master" Hi, we started a new installation of StarlingX 2.0, DUPLEX and having some problems with the TASK [bringup-essential-services : Initializing Kubernetes master] (see below) Network configuration seems to work fine: wget https://k8s.gcr.io --2019-12-04 15:16:40-- https://k8s.gcr.io/ Resolving proxy-mu.intel.com (proxy-mu.intel.com)... 10.217.247.236 Connecting to proxy-mu.intel.com (proxy-mu.intel.com)|10.217.247.236|:911... connected. Proxy request sent, awaiting response... 302 Found Location: https://cloud.google.com/container-registry/ [following] --2019-12-04 15:16:40-- https://cloud.google.com/container-registry/ Connecting to proxy-mu.intel.com (proxy-mu.intel.com)|10.217.247.236|:911... connected. Proxy request sent, awaiting response... 200 OK Length: 311325 (304K) [text/html] Saving to: 'index.html.2' 100%[==================================================================================================================================================>] 311,325 1.15MB/s in 0.3s 2019-12-04 15:16:41 (1.15 MB/s) - 'index.html.2' saved [311325/311325] Any idea? Thanks Marcel TASK [bringup-essential-services : Update Kube admin yaml with OpenID Connect info] ******************************************************************************************************** TASK [bringup-essential-services : Delete Kube admin yaml OpenID Connect entries if required config parameters are not present] ************************************************************ changed: [localhost] => (item=sed -i -e '/<%= @apiserver_oidc_client_id %>/d' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e '/<%= @apiserver_oidc_issuer_url %>/d' /etc/kubernetes/kubeadm.yaml) changed: [localhost] => (item=sed -i -e '/<%= @apiserver_oidc_username_claim %>/d' /etc/kubernetes/kubeadm.yaml) TASK [bringup-essential-services : Initializing Kubernetes master] ************************************************************************************************************************* fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubeadm", "init", "--config=/etc/kubernetes/kubeadm.yaml"], "delta": "0:01:32.357744", "end": "2019-12-04 15:11:42.604982", "msg": "non-zero return code", "rc": 1, "start": "2019-12-04 15:10:10.247238", "stderr": "\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.6. 
Latest validated version: 18.06\nerror execution phase preflight: [preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n, error: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", "stderr_lines": ["\t[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.6. 
Latest validated version: 18.06", "error execution phase preflight: [preflight] Some fatal errors occurred:", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)", ", error: exit status 1", "[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`"], "stdout": "[init] Using Kubernetes version: v1.13.5\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "stdout_lines": ["[init] Using Kubernetes version: v1.13.5", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"]} Intel Deutschland GmbH Registered Address: Am Campeon 10-12, 85579 Neubiberg, Germany Tel: +49 89 99 8853-0, www.intel.de Managing Directors: Christin Eisenschmid, Gary Kershaw Chairperson of the Supervisory Board: Nicole Lau Registered Office: Munich Commercial Register: Amtsgericht Muenchen HRB 186928 -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Thu Dec 5 08:11:14 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 5 Dec 2019 08:11:14 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608E9D6B@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F54CC@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608F6268@SHSMSX105.ccr.corp.intel.com> Hi Frank, Glad to hear that. I have rebased all my patches to latest. And your link is correct. 
Feel free to contact me if you have any question with it. Thanks. Best Regards Shuicheng From: Miller, Frank Sent: Thursday, December 5, 2019 6:07 AM To: Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the updates. We would like to take out your changes for KATA containers for a test. Can you rebase your commits and let me know if these are all of the commits: https://review.opendev.org/#/q/topic:kata+(status:open) Once you have rebased we'll create a designer build and run a few tests and let you know if we find anything that needs to be addressed. Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Sunday, December 01, 2019 9:15 PM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I try to run busybox with kata containers by k8s, and it could run successfully in IPv6 environment. Best Regards Shuicheng From: Miller, Frank > Sent: Saturday, November 30, 2019 4:03 AM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the update. It looks like stx-openstack has not yet been tested with IPv6. But we have been testing IPv6 with kubernetes platform only and simple k8s apps. Can you confirm kata containers is working with IPv6 when stx-openstack is not applied/not used? Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, November 29, 2019 12:48 AM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I created below LP for the IPv6 deployment issue I meet. Could you help check whether IPv6 deployment is verfied before and share me the BKM for it if there is? Thanks. https://bugs.launchpad.net/starlingx/+bug/1854316 Best Regards Shuicheng From: Miller, Frank > Sent: Tuesday, November 26, 2019 11:37 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting Abbreviated minutes: Next meeting: Tuesday Dec 10 Minutes: 1. Stx.3.0 gating LPs: * Plan for the current 18 gating LPs: * 4 LPs are expected to land for stx.3.0 including the 2 Highs * 2 LPs to be marked invalid/not reproducible * 11 LPs to be re-gated to stx.4.0 * 1 LP TBD (Erich Cordoba to update 1824881) 2. 
Stx.4.0 features: In features: * 2006145: Kata container support [Shuicheng Lin] --> resourced and In for stx.4.0 * 2006537: Decouple Container Applications from Platform [Bob Church] --> resourced and In for stx.4.0 * 2006770: Backup & Restore - openstack [Ovidiu Poncea] --> resourced and In for stx.4.0 * 2005312: Containerize Openstack clients --> In for now but requires plan * TBD: Upversion Kubernetes and container platform components --> haven't create SB yet but will be required during stx.4.0 NOT In features: * 2006787: Smaller memory node support [Austin Sun] --> not committed for stx.4.0 but being worked on for stx.4.0 (ie: prep) * 2004008: Fault Containerization --> not In because it requires splitting GUI plugin into 2: one with shared panels, the other with the platform panels which is not resourced Etherpad with full minutes: https://etherpad.openstack.org/p/stx-containerization Frank -----Original Appointment----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, November 25, 2019 3:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Containerization Meeting When: Tuesday, November 26, 2019 9:30 AM-10:00 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 Please join me for the bi-weekly containers meeting. Agenda for November 26 meeting: 1. stx.3.0 gating work items: 18 gating LPs (down from 26 at our last meeting) * Status update for high priority LPs (2): * https://bugs.launchpad.net/starlingx/+bug/1838659 kubernetes apiserver certificate needs rotation [Mingyuan Qi] * https://bugs.launchpad.net/starlingx/+bug/1851287 Controller failed to lock following a failover due to elastic pod failure to shutdown [Dan Voiculeasa] * Medium priority LPs (16): * Status for the 4 LPs < 50 days old: * https://bugs.launchpad.net/starlingx/+bug/1851294 [Angie Wang] * https://bugs.launchpad.net/starlingx/+bug/1850438 [Steve Webster] * https://bugs.launchpad.net/starlingx/+bug/1850189 [Stefan Dinescu] * https://bugs.launchpad.net/starlingx/+bug/1846829 [David Sullivan] * Status update for the 12 LPs that >100 days old. [Al, Angie, Bart, Erich, JimG, Ran, Shuicheng, Tao] * Can any be closed as not reproducible or won't fix? * Which ones are being actively worked on? Which ones do the owners have a plan to fix? 2. stx.4.0 planning: * 2006145: Kata container support [Shuicheng Lin] - Request update from Shuicheng if final 2 test scenarios are done (IPv6 testing + external registry with username/pwd authentication) * 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] - Request feature approach & spec update * 2006537: Decouple Container Applications from Platform (stx.4.0 feature) [Bob Church] - Feature status update * Other potential stx.4.0 features --> which are resourced/have plans to address in stx.4.0? 
* 2006770: Backup & Restore - openstack [Ovidiu Poncea] * 2005312: Containerize Openstack clients * 2004008: Fault Containerization * TBD: Upversion Kubernetes and container platform components Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers << File: ATT00002.txt >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Thu Dec 5 09:14:22 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Thu, 05 Dec 2019 03:14:22 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <029d15$645orv@orsmga008.jf.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191205T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From cindy.xie at intel.com Thu Dec 5 09:52:38 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 5 Dec 2019 09:52:38 +0000 Subject: [Starlingx-discuss] Use Case sharing: use StarlingX as DevOps Infra In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F360F2A81@SHSMSX104.ccr.corp.intel.com> Bruce, Kristal, Can you please check the wiki page from Bin and advise if this is appropriate to move to documentation site? I think the info will be very helpful for those who wants to use StarlingX as CI/CD infra. Thx. - cindy From: Yang, Bin Sent: Wednesday, December 4, 2019 10:01 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] Use Case sharing: use StarlingX as DevOps Infra Hi community friends, As you know, StarlingX has good scalability, which can start from 3 nodes and scale to 100+ nodes. This is perfect for devops use case. A new project might only have a few servers at the beginning and need to scale it in the future. So far, I am using StarlingX as DevOps Infra for our project CI/CD. I'd like to share some BKMs and an example StarlingX App for such use case. Here is the WiKi link. Any comment is welcome. https://wiki.openstack.org/wiki/Use_StarlingX_as_DevOps_Infra thanks, Bin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From teshan at vizuamatix.com Thu Dec 5 10:34:05 2019 From: teshan at vizuamatix.com (Teshan Senaratne) Date: Thu, 5 Dec 2019 16:04:05 +0530 Subject: [Starlingx-discuss] StarlingX Baremetal installation Error Message-ID: Hi All, I tried installing starlingX simplex baremetal version on a Dell PowerEdge R810 server. It has more than the specs required, except for the SSDs. It only has 6 x 500GB SATA disks which I configured into 3 arrays of raid 2. Once I started the installation by selecting the bootable pendrive it gives me the following error. Warning: dracut-initqueue timeout - starting timeout scripts ----> (this repeats for a while, and then) Warning: Could not boot. Starting Dracut Emergency Shell... Generating "/run/initramfs/rdsosreport.txt" Entering emergency mode. Exit the shell to continue. dracut:/# Any idea on how to mitigate this? -- Thanks & Regards, *Teshan Senaratne* -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Dec 5 16:16:10 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 5 Dec 2019 16:16:10 +0000 Subject: [Starlingx-discuss] Use Case sharing: use StarlingX as DevOps Infra In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F360F2A81@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F360F2A81@SHSMSX104.ccr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BED37E695@fmsmsx123.amr.corp.intel.com> Cindy, thank you. I think the page so far is a good example of how to setup a generic application on Starlingx. It doesn't provide any specifics about the application - what does it do, what resources does it need, how is it run, etc... So I think this could either be made into a generic "how to install your apps on StarlingX" document, or it could go into more detail on the actual application. For the later, it might be good as a "case study" kind of document - maybe with more details about how to actually operate the application under StarlingX. My $0.02. brucej From: Xie, Cindy Sent: Thursday, December 5, 2019 1:53 AM To: Yang, Bin ; 'starlingx-discuss at lists.starlingx.io' ; Jones, Bruce E ; Dale, Kristal Subject: RE: Use Case sharing: use StarlingX as DevOps Infra Bruce, Kristal, Can you please check the wiki page from Bin and advise if this is appropriate to move to documentation site? I think the info will be very helpful for those who wants to use StarlingX as CI/CD infra. Thx. - cindy From: Yang, Bin > Sent: Wednesday, December 4, 2019 10:01 AM To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] Use Case sharing: use StarlingX as DevOps Infra Hi community friends, As you know, StarlingX has good scalability, which can start from 3 nodes and scale to 100+ nodes. This is perfect for devops use case. A new project might only have a few servers at the beginning and need to scale it in the future. So far, I am using StarlingX as DevOps Infra for our project CI/CD. I'd like to share some BKMs and an example StarlingX App for such use case. Here is the WiKi link. Any comment is welcome. https://wiki.openstack.org/wiki/Use_StarlingX_as_DevOps_Infra thanks, Bin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From erich.cordoba.malibran at intel.com Thu Dec 5 16:44:17 2019 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Thu, 5 Dec 2019 16:44:17 +0000 Subject: [Starlingx-discuss] Setup of deployment with mixed sda and nvme disks Message-ID: <7DD3225C-EAA3-4633-9B6D-9EA0F310D5C9@intel.com> Hi all, I started to play around installing StarlingX in a system with NVMe disks. I followed this instructions[0] and I can get an AIO-Simplex system. However I'm wondering that if I want to setup any other configuration that requires additional systems, should these systems have NVMe disks? More specific, is it possible to setup a controller with SDA drives and computes with NVMe ? Thank you [0] https://docs.starlingx.io/deploy_install_guides/nvme_config.html From Alex.Kozyrev at windriver.com Thu Dec 5 19:41:45 2019 From: Alex.Kozyrev at windriver.com (Kozyrev, Alexander (Alex)) Date: Thu, 5 Dec 2019 19:41:45 +0000 Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? In-Reply-To: References: <609ddf91-2cf0-b65a-79e2-982b889d15eb@intel.com> Message-ID: Yong, I don't think we have any tests covering this functionality as of today. keystone-listener is enabled and up and running on StarlingX, that's for sure. Regards, Alex -----Original Message----- From: Yong Hu [mailto:yong.hu at intel.com] Sent: December 4, 2019 19:53 To: Kozyrev, Alexander (Alex) ; Sun, Yicheng (Jerry) ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How is Barbican being used in StarlingX? Hi Alex and Jerry, Thanks much! Your answers addressed my questions mostly. Though, I still wondered whether currently barbican-keystone-listener actually works in StarlingX, because even I turned on the debug flag, I didn't see log output from "class NotificationTask -> process_event" in "barbican/queue/keystone_listener.py", where there is supposedly some debug log about the incoming notification before handling "operation_type == 'deleted'". Have we ever tested this feature before? regards, Yong On 2019/12/5 5:30 AM, Kozyrev, Alexander (Alex) wrote: > Hi Yong, > > 1. We store BMC passwords in Barbican as well as Docker Registry credentials currently. > 2. Keystone listener processes only Keystone project delete events to purge all associated Barbican resources. > 3. Barbican identity in Keystone is used for authentication and multi-tenant authorization of Barbican. > > Regards, > Alex > > -----Original Message----- > From: Sun, Yicheng (Jerry) [mailto:Jerry.Sun at windriver.com] > Sent: December 4, 2019 12:07 > To: Yong Hu ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] How is Barbican being used in StarlingX? > > Hi Yong, > > I am not sure what the entire list of secrets stored in barbican is, but I know if you specify credentials for external docker registry to bootstrap the system with, barbican secrets will be created for those. > > Thanks, > Jerry > > -----Original Message----- > From: Yong Hu > Sent: Tuesday, December 3, 2019 10:34 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] How is Barbican being used in StarlingX? > > Hi folks, > I am working on a LP and might like to use "barbican-keystone-listener" > to monitor a keystone event. > > From "sm-dump", we are seeing "barbican-api", > "barbican-keystone-listener" and "barbican-worker" are > "enabled-active", but I have a few questions about how Barbican (on > host) is being used in > StarlingX: > 1. what secrets are stored in Barbican currently? 
btw: I got nothing by running "openstack secret list" or "barbican secret list". As well, "admin" and its password are stored in Keyring, not in Barbican. > 2. Does "barbican-keystone-listener" actually work with keystone in StarlingX? I enabled "debug" flag in /etc/barbican/barbican.conf, but when I was triggering events with keystone, it seemed there was no notifications captured by "barbican-keystone-listener". > 3. there is a "barbican" user/identify (with initial password) managed by keystone, but I wonder what we used this "barbican" for StarlingX. > > > thanks in advance! > > -Yong > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Jerry.Sun at windriver.com Thu Dec 5 20:01:57 2019 From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry)) Date: Thu, 5 Dec 2019 20:01:57 +0000 Subject: [Starlingx-discuss] New repo for Windows Active Directory Message-ID: Hi Saul, As a part of the windows active directory story, a new repo needs to be created for the new dex application. The spec is https://review.opendev.org/#/c/695042/ . Can you create this for me? I would like to have the repo called "oidc-auth-armada-app" and the primes as Bob Church, Greg Waines, and Chris Friesen. Thanks, Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Thu Dec 5 20:31:28 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 5 Dec 2019 12:31:28 -0800 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: References: Message-ID: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> Which project would this be part of for the governance docs? Sau! On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: > Hi Saul, > > As a part of the windows active directory story, a new repo needs to be > created for the new dex application. The spec is > https://review.opendev.org/#/c/695042/ . Can you create this for me? I > would like to have the repo called “oidc-auth-armada-app” and the primes > as Bob Church, Greg Waines, and Chris Friesen. 
> > Thanks, > > Jerry > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From maria.g.perez.ibarra at intel.com Thu Dec 5 23:36:17 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 5 Dec 2019 23:36:17 +0000 Subject: [Starlingx-discuss] [Final Regression testing - stx3.0 ] Report for 12/05/2019 Message-ID: StarlingX 3.0 Release Status: ISO: BUILD_ID=" 20191203T021136Z" from (link) ---------------------------------------------------------------------- MANUAL FINAL EXECUTION ---------------------------------------------------------------------- Overall Results: Total = 69 Pass = 25 Fail = 0 Blocked = 2 Not Run = 42 Obsolete = 0 Deferred = 0 Total executed = 25 Progress = 39% Pass Rate = 100% Formula used : Pass Rate = (pass * 100) / (pass + fail) --------------------------------------------------------------------------- AUTOMATED ROBOT FINAL EXECUTION --------------------------------------------------------------------------- Overall Results: Total = 50 Pass = 47 Fail = 3 Blocked = 0 Obsolete = 0 Deferred = 0 Not Valid = 0 Total executed = 50 Progress = 100% Pass Rate = 100% Formula used : Pass Rate = pass * 100 / (pass + fail) --------------------------------------------------------------------------- AUTOMATED PYTEST FINAL EXECUTION --------------------------------------------------------------------------- Waiting WR results. --------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1X085xI96M6PIeum87w6IEF11G-TG-W1OeGuAJWnrAWI/edit#gid=1717644237 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Thu Dec 5 23:46:01 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 5 Dec 2019 23:46:01 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20191205 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-5 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] List of docker images : http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007188.html regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Fri Dec 6 08:33:56 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 6 Dec 2019 08:33:56 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting In-Reply-To: <9700A18779F35F49AF027300A49E7C76608F6268@SHSMSX105.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C76608E9D6B@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F54CC@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F6268@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608F668B@SHSMSX105.ccr.corp.intel.com> Hi Frank, 1 more patch [0] is uploaded today. So there are 8 patches in total. With this patch, token server supports POST method for token fetch, so the WA in containerd is removed. [0]: https://review.opendev.org/697601 Best Regards Shuicheng From: Lin, Shuicheng Sent: Thursday, December 5, 2019 4:11 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, Glad to hear that. I have rebased all my patches to latest. And your link is correct. Feel free to contact me if you have any question with it. Thanks. Best Regards Shuicheng From: Miller, Frank > Sent: Thursday, December 5, 2019 6:07 AM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the updates. We would like to take out your changes for KATA containers for a test. 
Can you rebase your commits and let me know if these are all of the commits: https://review.opendev.org/#/q/topic:kata+(status:open) Once you have rebased we'll create a designer build and run a few tests and let you know if we find anything that needs to be addressed. Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Sunday, December 01, 2019 9:15 PM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I try to run busybox with kata containers by k8s, and it could run successfully in IPv6 environment. Best Regards Shuicheng From: Miller, Frank > Sent: Saturday, November 30, 2019 4:03 AM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the update. It looks like stx-openstack has not yet been tested with IPv6. But we have been testing IPv6 with kubernetes platform only and simple k8s apps. Can you confirm kata containers is working with IPv6 when stx-openstack is not applied/not used? Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, November 29, 2019 12:48 AM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I created below LP for the IPv6 deployment issue I meet. Could you help check whether IPv6 deployment is verfied before and share me the BKM for it if there is? Thanks. https://bugs.launchpad.net/starlingx/+bug/1854316 Best Regards Shuicheng From: Miller, Frank > Sent: Tuesday, November 26, 2019 11:37 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting Abbreviated minutes: Next meeting: Tuesday Dec 10 Minutes: 1. Stx.3.0 gating LPs: * Plan for the current 18 gating LPs: * 4 LPs are expected to land for stx.3.0 including the 2 Highs * 2 LPs to be marked invalid/not reproducible * 11 LPs to be re-gated to stx.4.0 * 1 LP TBD (Erich Cordoba to update 1824881) 2. Stx.4.0 features: In features: * 2006145: Kata container support [Shuicheng Lin] --> resourced and In for stx.4.0 * 2006537: Decouple Container Applications from Platform [Bob Church] --> resourced and In for stx.4.0 * 2006770: Backup & Restore - openstack [Ovidiu Poncea] --> resourced and In for stx.4.0 * 2005312: Containerize Openstack clients --> In for now but requires plan * TBD: Upversion Kubernetes and container platform components --> haven't create SB yet but will be required during stx.4.0 NOT In features: * 2006787: Smaller memory node support [Austin Sun] --> not committed for stx.4.0 but being worked on for stx.4.0 (ie: prep) * 2004008: Fault Containerization --> not In because it requires splitting GUI plugin into 2: one with shared panels, the other with the platform panels which is not resourced Etherpad with full minutes: https://etherpad.openstack.org/p/stx-containerization Frank -----Original Appointment----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, November 25, 2019 3:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Containerization Meeting When: Tuesday, November 26, 2019 9:30 AM-10:00 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 Please join me for the bi-weekly containers meeting. Agenda for November 26 meeting: 1. 
stx.3.0 gating work items: 18 gating LPs (down from 26 at our last meeting) * Status update for high priority LPs (2): * https://bugs.launchpad.net/starlingx/+bug/1838659 kubernetes apiserver certificate needs rotation [Mingyuan Qi] * https://bugs.launchpad.net/starlingx/+bug/1851287 Controller failed to lock following a failover due to elastic pod failure to shutdown [Dan Voiculeasa] * Medium priority LPs (16): * Status for the 4 LPs < 50 days old: * https://bugs.launchpad.net/starlingx/+bug/1851294 [Angie Wang] * https://bugs.launchpad.net/starlingx/+bug/1850438 [Steve Webster] * https://bugs.launchpad.net/starlingx/+bug/1850189 [Stefan Dinescu] * https://bugs.launchpad.net/starlingx/+bug/1846829 [David Sullivan] * Status update for the 12 LPs that >100 days old. [Al, Angie, Bart, Erich, JimG, Ran, Shuicheng, Tao] * Can any be closed as not reproducible or won't fix? * Which ones are being actively worked on? Which ones do the owners have a plan to fix? 2. stx.4.0 planning: * 2006145: Kata container support [Shuicheng Lin] - Request update from Shuicheng if final 2 test scenarios are done (IPv6 testing + external registry with username/pwd authentication) * 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] - Request feature approach & spec update * 2006537: Decouple Container Applications from Platform (stx.4.0 feature) [Bob Church] - Feature status update * Other potential stx.4.0 features --> which are resourced/have plans to address in stx.4.0? * 2006770: Backup & Restore - openstack [Ovidiu Poncea] * 2005312: Containerize Openstack clients * 2004008: Fault Containerization * TBD: Upversion Kubernetes and container platform components Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers << File: ATT00002.txt >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Fri Dec 6 09:09:44 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Fri, 06 Dec 2019 03:09:44 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <145d7b$c8rtuq@fmsmga005.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191206T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From bin.yang at intel.com Fri Dec 6 09:37:05 2019 From: bin.yang at intel.com (Yang, Bin) Date: Fri, 6 Dec 2019 09:37:05 +0000 Subject: [Starlingx-discuss] Use Case sharing: use StarlingX as DevOps Infra In-Reply-To: <9A85D2917C58154C960D95352B22818BED37E695@fmsmsx123.amr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F360F2A81@SHSMSX104.ccr.corp.intel.com> <9A85D2917C58154C960D95352B22818BED37E695@fmsmsx123.amr.corp.intel.com> Message-ID: Hi Bruce, I add the design diagram section in the wiki page. The details of Jenkins, docker image build and local registry are not included in this wiki page. Audiences can read the documents from their official websites. Thanks, Bin From: Jones, Bruce E Sent: Friday, December 6, 2019 00:16 To: Xie, Cindy ; Yang, Bin ; Dale, Kristal Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Use Case sharing: use StarlingX as DevOps Infra Cindy, thank you. I think the page so far is a good example of how to setup a generic application on Starlingx. It doesn't provide any specifics about the application - what does it do, what resources does it need, how is it run, etc... So I think this could either be made into a generic "how to install your apps on StarlingX" document, or it could go into more detail on the actual application. For the later, it might be good as a "case study" kind of document - maybe with more details about how to actually operate the application under StarlingX. My $0.02. brucej From: Xie, Cindy Sent: Thursday, December 5, 2019 1:53 AM To: Yang, Bin >; 'starlingx-discuss at lists.starlingx.io' >; Jones, Bruce E >; Dale, Kristal > Subject: RE: Use Case sharing: use StarlingX as DevOps Infra Bruce, Kristal, Can you please check the wiki page from Bin and advise if this is appropriate to move to documentation site? I think the info will be very helpful for those who wants to use StarlingX as CI/CD infra. Thx. - cindy From: Yang, Bin > Sent: Wednesday, December 4, 2019 10:01 AM To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] Use Case sharing: use StarlingX as DevOps Infra Hi community friends, As you know, StarlingX has good scalability, which can start from 3 nodes and scale to 100+ nodes. This is perfect for devops use case. 
A new project might only have a few servers at the beginning and need to scale it in the future. So far, I am using StarlingX as DevOps Infra for our project CI/CD. I'd like to share some BKMs and an example StarlingX App for such use case. Here is the WiKi link. Any comment is welcome. https://wiki.openstack.org/wiki/Use_StarlingX_as_DevOps_Infra thanks, Bin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Fri Dec 6 13:43:45 2019 From: Greg.Waines at windriver.com (Waines, Greg) Date: Fri, 6 Dec 2019 13:43:45 +0000 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> References: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> Message-ID: Security From: Saul Wold Date: Thursday, December 5, 2019 at 3:33 PM To: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Which project would this be part of for the governance docs? Sau! On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: Hi Saul, As a part of the windows active directory story, a new repo needs to be created for the new dex application. The spec is https://review.opendev.org/#/c/695042/ . Can you create this for me? I would like to have the repo called “oidc-auth-armada-app” and the primes as Bob Church, Greg Waines, and Chris Friesen. Thanks, Jerry _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dariush.Eslimi at windriver.com Fri Dec 6 13:43:56 2019 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Fri, 6 Dec 2019 13:43:56 +0000 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> References: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> Message-ID: I think should be under security, I like others to chime in. Dariush -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: December-05-19 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Which project would this be part of for the governance docs? Sau! On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: > Hi Saul, > > As a part of the windows active directory story, a new repo needs to > be created for the new dex application. The spec is > https://review.opendev.org/#/c/695042/ . Can you create this for me? I > would like to have the repo called “oidc-auth-armada-app” and the > primes as Bob Church, Greg Waines, and Chris Friesen. 
> > Thanks, > > Jerry > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Brent.Rowsell at windriver.com Fri Dec 6 13:51:50 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Fri, 6 Dec 2019 13:51:50 +0000 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: References: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC27EA6F4@ALA-MBD.corp.ad.wrs.com> This is for k8s api a&a, should go under the containers sub-project. Brent -----Original Message----- From: Eslimi, Dariush [mailto:Dariush.Eslimi at windriver.com] Sent: Friday, December 6, 2019 8:44 AM To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory I think should be under security, I like others to chime in. Dariush -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: December-05-19 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Which project would this be part of for the governance docs? Sau! On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: > Hi Saul, > > As a part of the windows active directory story, a new repo needs to > be created for the new dex application. The spec is > https://review.opendev.org/#/c/695042/ . Can you create this for me? I > would like to have the repo called “oidc-auth-armada-app” and the > primes as Bob Church, Greg Waines, and Chris Friesen. > > Thanks, > > Jerry > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Jerry.Sun at windriver.com Fri Dec 6 14:37:50 2019 From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry)) Date: Fri, 6 Dec 2019 14:37:50 +0000 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> References: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> Message-ID: We would like this to be under containers. Thanks, Jerry -----Original Message----- From: Saul Wold Sent: Thursday, December 5, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Which project would this be part of for the governance docs? Sau! On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: > Hi Saul, > > As a part of the windows active directory story, a new repo needs to > be created for the new dex application. The spec is > https://review.opendev.org/#/c/695042/ . Can you create this for me? I > would like to have the repo called “oidc-auth-armada-app” and the > primes as Bob Church, Greg Waines, and Chris Friesen. 
> > Thanks, > > Jerry > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Fri Dec 6 14:42:33 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 6 Dec 2019 14:42:33 +0000 Subject: [Starlingx-discuss] stx.3.0 Final Compile Candidate Planned for Dec 11 Message-ID: <151EE31B9FCCA54397A757BC674650F0C1601CE4@ALA-MBD.corp.ad.wrs.com> Hello all, The starlingx release team agreed yesterday that the stx.3.0 final compile candidate build will be targeted for Wednesday Dec 11. There are currently 43 bugs tagged for stx.3.0: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.3.0 with 11 bugs with Critical / Major priority. Please continue to work the gating bugs and remember to cherrypick to the r/stx.3.0 branch. Another email will be sent out when the r/stx.3.0 branch is closed for submission in preparation for the release tagging, etc. Regards, Ghada From Jerry.Sun at windriver.com Fri Dec 6 17:03:05 2019 From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry)) Date: Fri, 6 Dec 2019 17:03:05 +0000 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: References: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> Message-ID: Oops, did not see the other peoples emails in the same thread, pretend I didn't say anything. -----Original Message----- From: Sun, Yicheng (Jerry) Sent: Friday, December 6, 2019 9:38 AM To: Saul Wold ; starlingx-discuss at lists.starlingx.io Cc: Rowsell, Brent Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory We would like this to be under containers. Thanks, Jerry -----Original Message----- From: Saul Wold Sent: Thursday, December 5, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Which project would this be part of for the governance docs? Sau! On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: > Hi Saul, > > As a part of the windows active directory story, a new repo needs to > be created for the new dex application. The spec is > https://review.opendev.org/#/c/695042/ . Can you create this for me? I > would like to have the repo called “oidc-auth-armada-app” and the > primes as Bob Church, Greg Waines, and Chris Friesen. 
> > Thanks, > > Jerry > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Fri Dec 6 17:52:55 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 6 Dec 2019 17:52:55 +0000 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: References: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C1602132@ALA-MBD.corp.ad.wrs.com> Hi Saul, It appears there are two schools of thought here (containers or security), but given this is directly tied to the k8s API authentication, it should be under the containers sub-project as suggested by Brent and Jerry. This is in line with keystone being under distro.openstack. Sorry for the back & forth. Regards, Ghada -----Original Message----- From: Sun, Yicheng (Jerry) [mailto:Jerry.Sun at windriver.com] Sent: Friday, December 06, 2019 12:03 PM To: Sun, Yicheng (Jerry); Saul Wold; starlingx-discuss at lists.starlingx.io Cc: Rowsell, Brent Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Oops, did not see the other peoples emails in the same thread, pretend I didn't say anything. -----Original Message----- From: Sun, Yicheng (Jerry) Sent: Friday, December 6, 2019 9:38 AM To: Saul Wold ; starlingx-discuss at lists.starlingx.io Cc: Rowsell, Brent Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory We would like this to be under containers. Thanks, Jerry -----Original Message----- From: Saul Wold Sent: Thursday, December 5, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Which project would this be part of for the governance docs? Sau! On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: > Hi Saul, > > As a part of the windows active directory story, a new repo needs to > be created for the new dex application. The spec is > https://review.opendev.org/#/c/695042/ . Can you create this for me? I > would like to have the repo called “oidc-auth-armada-app” and the > primes as Bob Church, Greg Waines, and Chris Friesen. 
> > Thanks, > > Jerry > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Fri Dec 6 18:32:02 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 6 Dec 2019 18:32:02 +0000 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C1602132@ALA-MBD.corp.ad.wrs.com> References: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> <151EE31B9FCCA54397A757BC674650F0C1602132@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BED3805DC@fmsmsx123.amr.corp.intel.com> I agree. Speaking as a member of the security team, currently the team doesn't actually own any code. It makes more sense to me to have this land within an existing development project. brucej -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, December 6, 2019 9:53 AM To: Sun, Yicheng (Jerry) ; Sun, Yicheng (Jerry) ; Saul Wold ; starlingx-discuss at lists.starlingx.io Cc: Rowsell, Brent Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Hi Saul, It appears there are two schools of thought here (containers or security), but given this is directly tied to the k8s API authentication, it should be under the containers sub-project as suggested by Brent and Jerry. This is in line with keystone being under distro.openstack. Sorry for the back & forth. Regards, Ghada -----Original Message----- From: Sun, Yicheng (Jerry) [mailto:Jerry.Sun at windriver.com] Sent: Friday, December 06, 2019 12:03 PM To: Sun, Yicheng (Jerry); Saul Wold; starlingx-discuss at lists.starlingx.io Cc: Rowsell, Brent Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Oops, did not see the other peoples emails in the same thread, pretend I didn't say anything. -----Original Message----- From: Sun, Yicheng (Jerry) Sent: Friday, December 6, 2019 9:38 AM To: Saul Wold ; starlingx-discuss at lists.starlingx.io Cc: Rowsell, Brent Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory We would like this to be under containers. Thanks, Jerry -----Original Message----- From: Saul Wold Sent: Thursday, December 5, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory Which project would this be part of for the governance docs? Sau! On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: > Hi Saul, > > As a part of the windows active directory story, a new repo needs to > be created for the new dex application. The spec is > https://review.opendev.org/#/c/695042/ . Can you create this for me? I > would like to have the repo called “oidc-auth-armada-app” and the > primes as Bob Church, Greg Waines, and Chris Friesen. 
> > Thanks, > > Jerry > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Fri Dec 6 18:40:49 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 6 Dec 2019 10:40:49 -0800 Subject: [Starlingx-discuss] New repo for Windows Active Directory In-Reply-To: References: <504ef79c-733c-3309-87d3-ba4dd06f883b@linux.intel.com> Message-ID: <9dd24608-92cc-c0e5-c326-6a0d1fd08e2a@linux.intel.com> The repo has been created here: https://opendev.org/starlingx/oidc-auth-armada-app The group has been created with Bob being the initial member, he can add other members to the group. I still need a decision on the project: containers vs security. It seems to me that containers is probably a better overall place for it as security is a closed group primarily discussion CVEs and other security related issues that requires a smaller group. Once the decision is made, someone from that team could do the update to the governance repo, it does not need to be me at this point. Sau! On 12/6/19 9:03 AM, Sun, Yicheng (Jerry) wrote: > Oops, did not see the other peoples emails in the same thread, pretend I didn't say anything. > > -----Original Message----- > From: Sun, Yicheng (Jerry) > Sent: Friday, December 6, 2019 9:38 AM > To: Saul Wold ; starlingx-discuss at lists.starlingx.io > Cc: Rowsell, Brent > Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory > > We would like this to be under containers. > > Thanks, > Jerry > > -----Original Message----- > From: Saul Wold > Sent: Thursday, December 5, 2019 3:31 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] New repo for Windows Active Directory > > > Which project would this be part of for the governance docs? > > Sau! > > > > On 12/5/19 12:01 PM, Sun, Yicheng (Jerry) wrote: >> Hi Saul, >> >> As a part of the windows active directory story, a new repo needs to >> be created for the new dex application. The spec is >> https://review.opendev.org/#/c/695042/ . Can you create this for me? I >> would like to have the repo called “oidc-auth-armada-app” and the >> primes as Bob Church, Greg Waines, and Chris Friesen. 
>> >> Thanks, >> >> Jerry >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From maria.g.perez.ibarra at intel.com Fri Dec 6 20:13:05 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 6 Dec 2019 20:13:05 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191206 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-6 (link) Status: GREEN Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] We do not have the baremetal results due to internal network problems. regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Fri Dec 6 20:16:53 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 6 Dec 2019 20:16:53 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191206 Message-ID: Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-06 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From ji at sibyl.li Sat Dec 7 02:31:42 2019 From: ji at sibyl.li (Austin Gillmann) Date: Fri, 6 Dec 2019 20:31:42 -0600 Subject: [Starlingx-discuss] Configuring Openstack SSL Certificates Message-ID: Hello all, First off thank you Robert and Martin for the assistance re: my question about enabling swift. I found about the helm chart overrides on my own but forgot about the extra service. I since successfully deployed and all is working as intended, one minor question is how would I go about adding ssl certificates to the Openstack API's and Horizon. I found a stub page relating to it, but no other references except for stx-config docs that may just be for platform services. Do let me know and thanks again; have a great weekend everyone! 
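As a rough, hedged sketch of the helm-override path being asked about (the commands follow the standard sysinv helm-override workflow; the chart name "horizon", the override file name, and any TLS value keys inside it are assumptions to be checked against the openstack-helm chart defaults, not details taken from this thread):

system helm-override-list stx-openstack
system helm-override-show stx-openstack horizon openstack
# put the certificate/TLS settings in a local yaml file first; the exact value structure is chart-specific
system helm-override-update stx-openstack horizon openstack --values horizon-tls-overrides.yaml
system application-apply stx-openstack

Re-applying the application is what pushes updated overrides into the deployed charts.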
-Austin Gillmann From cristopher.j.lemus.contreras at intel.com Sun Dec 8 09:04:49 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Sun, 08 Dec 2019 03:04:49 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: List of docker images required for "platform-integ-apps": BUILD_ID="20191208T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From ezpeerchen at gmail.com Mon Dec 9 03:39:32 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Mon, 9 Dec 2019 11:39:32 +0800 Subject: [Starlingx-discuss] StarlingX support dpdk with VPP ? Message-ID: Dear all, How could i configure StarlingX to enable dpdk + VPP (Vector Packet Processing) ? Or It is not supported now ? Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Mon Dec 9 09:09:20 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Mon, 09 Dec 2019 03:09:20 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <3cea8d$791s8q@fmsmga002.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191209T000000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Volker.Hoesslin at swsn.de Mon Dec 9 09:33:08 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 9 Dec 2019 09:33:08 +0000 Subject: [Starlingx-discuss] data-network down Message-ID: Hi, my STX 2.0 cluster is up and running (2x controller, 2x worker, 3x storage). but one of my worker have an problem, i can not fix it. after time (i do not know in detail, in most times over night) the data-network from all VMs are broken down, no connection over data-network are available any more. the other worker is running fine without any problems. 
If I live-migrate the VMs from one worker to another and back again, the data-network is coming up and all is fine again. I do not find any logs that would explain this problem; any suggestions or hints on what to look for? greez & thx, volker. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Mon Dec 9 11:48:19 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Mon, 9 Dec 2019 11:48:19 +0000 Subject: [Starlingx-discuss] StarlingX support dpdk with VPP ? In-Reply-To: References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC27F1425@ALA-MBD.corp.ad.wrs.com> Hello, This is not supported on StarlingX. If this is something you are interested in working on, I would suggest engaging the networking sub-project team. Brent From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Sunday, December 8, 2019 10:40 PM To: starlingx-discuss Subject: [Starlingx-discuss] StarlingX support dpdk with VPP ? Dear all, How could I configure StarlingX to enable DPDK + VPP (Vector Packet Processing)? Or is it not supported yet? Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Dec 9 14:21:16 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 9 Dec 2019 14:21:16 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - Dec 5/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0C16024FF@ALA-MBD.corp.ad.wrs.com> Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases stx.3.0 - Feature Test - Re-test for GPU & QAT is complete. This concludes feature testing for stx.3.0 - Regression Test - r/stx.3.0 ISO became available a little later than expected, so Final Regression will go beyond Dec 5. - Expect to have 60% done by Dec 5, but will continue into next week. - Bugs - Total critical/high/medium: 46 -- https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0 - Critical/High - 12 ...but some are more than 100 days old, so we will likely downgrade their priority - Agreed that the final compile target would be Dec 11 - Critical/High priority bugs that are not solved will go into a maintenance release - Medium priority bugs will be downgraded if > 100 days or moved to stx.4.0 - Docs - Ghada to send out draft release notes - Ask Kristal about the docs status for stx.3.0 -- doc initiatives can continue beyond the release dates stx.4.0 - Release Plan: - https://docs.google.com/spreadsheets/d/1a93wt0XO0_JvajnPzQwnqFkXfdDysKVnHpbrEc17_yg/edit#gid=1107209846 - Went through each feature candidate on the list above and updated resourcing / status - Currently 19 out of 31 items are resourced - Discussed having the PLs identify if there are items they should request help for from the community - Options are to keep a list of items on the wiki or send it to the starlingx mailing list - Unit Test Initiative - Bill Zvonar is reaching out to the PLs to determine which domains this is applicable to. Once each team determines what's applicable to their code base, they can define what is a tangible improvement they will target for stx.4.0.
- For reference, email highlighting the UT initiative for the config domain: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-October/006710.html - Domains / PLs to track: - Containers: Frank/Bob - Distcloud: Dariush/Bart - Flock Services: Dariush/Bart - Networking: Huifeng/Matt - Security: Ghada/Victor - Config: Dariush/John - Distro non-OpenStack: Austin/Saul - Distro OpenStack: Yong - Regression & Performance Test Automation Initiatives - Bruce took the action to follow up with Ada on the plan related to test automation and testing in the open. - Testing in the open is a key item required for project confirmation in May 2020 From Frank.Miller at windriver.com Mon Dec 9 16:52:06 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 9 Dec 2019 16:52:06 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Meeting Message-ID: Please join me for the bi-weekly containers meeting. Agenda for December 10 meeting: 1. Unit Test strategy for containers repos - brainstorm session by all of us Reference from the config subproject: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-October/006710.html Questions to discuss: - What level/type of automation testing would provide most benefit to the sub-project? - applicability, address current lack of coverage, other considerations - What area of the sub-project's code is most in need of additional coverage? - heat map of issues, rate of change, desire to 'open' to non-experts - Where should the sub-project invest to increase/maintain quality while enabling contributions? - What work is required - example testcases, documentation, test infrastructure to lower the bar for writing effective unit tests? 2. Review/discussion on Mingyuan's new user application generation tool: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007159.html 3. Any last stx.3.0 gating items? - Intel GPU & QAT plugin SBs still have tasks open - these should be closed or split into a small stx.4.0 SB - 2 High LPs & 5 medium LPs: great progress by all to reduce the # of LPs (down from 18 at our last meeting) 4. Update on stx.4.0 & other features: - SB link: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.containers&tags=stx.4.0&project_group_id=86 - Updates on a subset: - 2005527: CEPH Containerization in StarlingX [Tingjie Chen] - 2006145: Kata container support [Shuicheng Lin] - 2006537: Decouple Container Applications from Platform [Bob Church] - 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: text/calendar Size: 4122 bytes Desc: not available URL: From Matt.Peters at windriver.com Mon Dec 9 18:38:25 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Mon, 9 Dec 2019 18:38:25 +0000 Subject: [Starlingx-discuss] data-network down Message-ID: <74C7C868-5BA2-4373-BE3F-D0ADE2BC18EF@windriver.com> Hello, Are you running with containerized OVS (system vswitch_type=none) or OVS-DPDK on the host (system vswitch_type=ovs-dpdk)? In either case, I would examine the logs from OVS to see if there are any errors being generated for the host that is exhibiting the failure. In addition, you can check your infrastructure data network configuration to ensure it is setup as you expected by reviewing the LLDP information for the host (system host-lldp-neighbor-list) or the graphical display on the dashboard. -Matt From: "von Hoesslin, Volker" Date: Monday, December 9, 2019 at 4:34 AM To: "'starlingx-discuss at lists.starlingx.io'" Subject: [Starlingx-discuss] data-network down Hi, my STX 2.0 cluster is up and running (2x controller, 2x worker, 3x storage). but one of my worker have an problem, i can not fix it. after time (i do not know in detail, in most times over night) the data-network from all VMs are broken down, no connection over data-network are available any more. the other worker is running fine without any problems. if i live migrate the VMs from one to another worker and back again, the data-network is comming up and all is fine again. i do not find any logs that will explain this problem, any suggestions or hints for locking for? greez & thx, volker. Senden -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Mon Dec 9 23:29:46 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 9 Dec 2019 23:29:46 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191209 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-9 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] List of docker images : 
http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007215.html regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Mon Dec 9 23:35:20 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 9 Dec 2019 23:35:20 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191209 Message-ID: Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-09 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Mon Dec 9 23:53:16 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Mon, 09 Dec 2019 17:53:16 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: List of docker images required for "platform-integ-apps": BUILD_ID="r/stx.3.0" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Volker.Hoesslin at swsn.de Tue Dec 10 07:54:38 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Tue, 10 Dec 2019 07:54:38 +0000 Subject: [Starlingx-discuss] data-network down In-Reply-To: <74C7C868-5BA2-4373-BE3F-D0ADE2BC18EF@windriver.com> References: <74C7C868-5BA2-4373-BE3F-D0ADE2BC18EF@windriver.com> Message-ID: Hi, my OVS is configured per the recommendation from the installation guide: system vswitch_type=none. The LLDP discovery currently shows the expected behavior; I will recheck when the next error occurs. Btw, where can I see OVS error logs? I'm not sure which of these many logs I should focus on for OVS errors… Volker. From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Monday, December 9, 2019 19:38 To: von Hoesslin, Volker; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] data-network down Hello, Are you running with containerized OVS (system vswitch_type=none) or OVS-DPDK on the host (system vswitch_type=ovs-dpdk)? In either case, I would examine the logs from OVS to see if there are any errors being generated for the host that is exhibiting the failure. In addition, you can check your infrastructure data network configuration to ensure it is setup as you expected by reviewing the LLDP information for the host (system host-lldp-neighbor-list) or the graphical display on the dashboard. -Matt From: "von Hoesslin, Volker" Date: Monday, December 9, 2019 at 4:34 AM To: "'starlingx-discuss at lists.starlingx.io'" Subject: [Starlingx-discuss] data-network down Hi, my STX 2.0 cluster is up and running (2x controller, 2x worker, 3x storage). But one of my workers has a problem that I cannot fix: after some time (I do not know exactly when, in most cases over night) the data-network of all VMs on it breaks down; no connection over the data-network is available any more. The other worker is running fine without any problems. If I live-migrate the VMs from one worker to another and back again, the data-network is coming up and all is fine again. I do not find any logs that would explain this problem; any suggestions or hints on what to look for? greez & thx, volker. -------------- next part -------------- An HTML attachment was scrubbed...
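A hedged sketch of where those OVS logs would typically be found (pod names, hostnames and paths below are illustrative assumptions, not taken from this thread). With containerized OVS (vswitch_type=none) the vswitch runs as pods in the openstack namespace; with ovs-dpdk it runs directly on the worker host:

system show                                # vswitch_type is expected to appear in the output
system host-lldp-neighbor-list compute-0   # per-host LLDP view mentioned above
# containerized OVS: check the openvswitch pods scheduled on the affected worker
kubectl -n openstack get pods -o wide | grep openvswitch
kubectl -n openstack logs <openvswitch-vswitchd-pod-on-that-worker>
# host-based ovs-dpdk: OVS logs are expected under /var/log/openvswitch/ on the worker itself
tail /var/log/openvswitch/ovs-vswitchd.log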
URL: From austin.sun at intel.com Tue Dec 10 08:46:30 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 10 Dec 2019 08:46:30 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 12/11/2019 Message-ID: Agenda for 12/11 meeting: * stx.4.0 feature - Standardize Flock Package Versioning (JITStack Daniels) - Kata Container (Shuicheng) - CentOS 8.0 upgrade planning (Shuai Zhao) SRPM, tarball : Container Build : * stx 3.0 bugs fix - CVE issue tracking (Shuicheng) * OVMF * kernel change upgrade 1062 - Storage issue tracking (Tingjie) 4-Medium https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.storage LP#1826886 cinder cmd not working intermittently ----Ma zheng LP#1844164 alarm 800.001 raised on lock storage-0 and not cleared when storage-0 unlocks --- Martin LP#1847336 IPv6 Distributed Cloud: ansible-playbook 'Wipe ceph osds' does not support re-play / re-entrance ---- Stefan LP#1848198 Glance backend present on non-openstack deployment ---- Stefan Dinescu - Others issue tracking (Austin) https://bugs.launchpad.net/starlingx/+bug/1847335 * Opens (All) Update the agenda if other topic to be discussed : https://etherpad.openstack.org/p/stx-distro-other Thanks. BR Austin Sun. From cristopher.j.lemus.contreras at intel.com Tue Dec 10 09:01:45 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Tue, 10 Dec 2019 03:01:45 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: List of docker images required for "platform-integ-apps": BUILD_ID="r/stx.3.0" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From parkeryan at tencent.com Tue Dec 10 11:43:23 2019 From: parkeryan at tencent.com (=?utf-8?B?cGFya2VyeWFuKOmXq+W/l+adsCk=?=) Date: Tue, 10 Dec 2019 11:43:23 +0000 Subject: [Starlingx-discuss] Stx-openstack application doesn't take effect on extra compute nodes. Message-ID: Dear all, After deploying StarlingX 2.0 with stx-openstack in 2+2 mode, I added some more compute nodes, and labeled them with system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled system host-label-assign $NODE sriov=enabled After unlocking these extra compute nodes, I found the stx-openstack application didn’t deploy on them, even if I reapplied the stx-openstack application. Was there some action I should take to trigger the deploying? 
Here is the detailed log: [sysadmin at controller-0 ~(keystone_admin)]$ system application-list +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ | application | version | manifest name | manifest file | status | progress | +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed | | stx-openstack | 1.0-17-centos-stable-latest | armada-manifest | stx-openstack.yaml | applied | completed | +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | unlocked | enabled | available | | 3 | compute-0 | worker | unlocked | enabled | available | | 4 | compute-1 | worker | unlocked | enabled | available | | 5 | compute-2 | worker | unlocked | enabled | available | | 6 | compute-3 | worker | unlocked | enabled | available | | 7 | compute-4 | worker | unlocked | enabled | available | | 8 | compute-5 | worker | unlocked | enabled | available | | 9 | compute-6 | worker | unlocked | enabled | available | | 10 | compute-7 | worker | unlocked | enabled | available | | 11 | compute-8 | worker | unlocked | enabled | available | | 12 | compute-9 | worker | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-0 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-0 | openstack-compute-node | enabled | | compute-0 | openvswitch | enabled | | compute-0 | sriov | enabled | | compute-0 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-1 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-1 | openstack-compute-node | enabled | | compute-1 | openvswitch | enabled | | compute-1 | sriov | enabled | | compute-1 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-2 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-2 | openstack-compute-node | enabled | | compute-2 | openvswitch | enabled | | compute-2 | sriov | enabled | | compute-2 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-3 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-3 | openstack-compute-node | enabled | | compute-3 | openvswitch | enabled | | compute-3 
| sriov | enabled | | compute-3 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-4 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-4 | openstack-compute-node | enabled | | compute-4 | openvswitch | enabled | | compute-4 | sriov | enabled | | compute-4 | sriovdp | enabled | +-----------+------------------------+-------------+ controller-0:~/sow$ openstack host list +-----------------------------------+-------------+----------+ | Host Name | Service | Zone | +-----------------------------------+-------------+----------+ | nova-consoleauth-67b4db556b-z6s6c | consoleauth | internal | | nova-consoleauth-67b4db556b-w7ztm | consoleauth | internal | | nova-conductor-76d979ff86-6l8ht | conductor | internal | | nova-scheduler-cd946798c-wkjpn | scheduler | internal | | nova-conductor-76d979ff86-l8s94 | conductor | internal | | nova-scheduler-cd946798c-vdmvb | scheduler | internal | | compute-1 | compute | nova | | compute-0 | compute | nova | +-----------------------------------+-------------+----------+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Dec 10 15:21:47 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 10 Dec 2019 15:21:47 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Meeting In-Reply-To: References: Message-ID: Minutes from today's meeting: Next meeting will be next week on Dec 17. Then no meeting after that until January. * We did not have quorum to brainstorm the unit test approach. Will re-schedule to Dec 17 meeting. * Mingyuan has created a nice tool to create/build container applications to run on StarlingX. Request the container team members review his commit this week: https://review.opendev.org/#/c/697013/ * For stx.3.0, 2 features still have open tasks. Ran and Mingyuan described the remaining tasks (some are ready to merge) and took the action to create a small stx.4.0 storyboard to move these tasks to stx.4.0. Then the stx.3.0 features can be marked complete: o Intel GPU device plugin: https://storyboard.openstack.org/#!/story/2005937 o QAT device plugin: https://storyboard.openstack.org/#!/story/2005514 * Tingjie gave a good overview of Rook and ceph containerization. One outstanding request is to look at how sysinv is being used and change the design so rook manages the ceph configuration and not sysinv. We'll have a follow-up discussion at the Dec 17 meeting. Frank -----Original Appointment----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, December 09, 2019 11:52 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Containerization Meeting When: Tuesday, December 10, 2019 9:30 AM-10:00 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 Please join me for the bi-weekly containers meeting. Agenda for December 10 meeting: 1. Unit Test strategy for containers repos - brainstorm session by all of us Reference from the config subproject: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-October/006710.html Questions to discuss: - What level/type of automation testing would provide most benefit to the sub-project? - applicability, address current lack of coverage, other considerations - What area of the sub-project's code is most in need of additional coverage? 
- heat map of issues, rate of change, desire to 'open' to non-experts - Where should the sub-project invest to increase/maintain quality while enabling contributions? - What work is required - example testcases, documentation, test infrastructure to lower the bar for writing effective unit tests? 2. Review/discussion on Mingyuan's new user application generation tool: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007159.html 3. Any last stx.3.0 gating items? - Intel GPU & QAT plugin SBs still have tasks open - these should be closed or split into a small stx.4.0 SB - 2 High LPs & 5 medium LPs: great progress by all to reduce the # of LPs (down from 18 at our last meeting) 4. Update on stx.4.0 & other features: - SB link: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.containers&tags=stx.4.0&project_group_id=86 - Updates on a subset: - 2005527: CEPH Containerization in StarlingX [Tingjie Chen] - 2006145: Kata container support [Shuicheng Lin] - 2006537: Decouple Container Applications from Platform [Bob Church] - 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers << File: ATT00002.txt >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Dec 10 15:27:29 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 10 Dec 2019 15:27:29 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Meeting Message-ID: Just 2 topics for Dec 17th: Agenda for December 17 meeting: 1. Unit Test strategy for containers repos - brainstorm session by all of us Reference from the config subproject: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-October/006710.html Questions to discuss: - What level/type of automation testing would provide most benefit to the sub-project? - applicability, address current lack of coverage, other considerations - What area of the sub-project's code is most in need of additional coverage? - heat map of issues, rate of change, desire to 'open' to non-experts - Where should the sub-project invest to increase/maintain quality while enabling contributions? - What work is required - example testcases, documentation, test infrastructure to lower the bar for writing effective unit tests? 2. Discuss topics for these 2 features: * - 2005527: CEPH Containerization in StarlingX [Tingjie Chen] --> One outstanding request to look at how sysinv is being used and change the design so rook manages the ceph configuration and not sysinv. * - 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] --> review update on feature approach -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: text/calendar Size: 2842 bytes Desc: not available URL: From cristopher.j.lemus.contreras at intel.com Tue Dec 10 17:46:03 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Tue, 10 Dec 2019 11:46:03 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <3cea8d$79il6f@fmsmga002.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="r/stx.3.0" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From David.Sullivan at windriver.com Tue Dec 10 19:50:53 2019 From: David.Sullivan at windriver.com (Sullivan, David) Date: Tue, 10 Dec 2019 19:50:53 +0000 Subject: [Starlingx-discuss] [DOCS] Allow configuration of PTP master/slave interfaces Message-ID: As part of this Story there are changes that impact documentation. Let me know if you require further details. Thanks, David --- Allow configuration of PTP master/slave interfaces As part of this story/task we have changed how PTP interfaces are assigned to hosts. PTP master/slave interfaces will not be defined by default. They must be specified by the administrator for each host. A new option, ptp_role has been added to the interface configuration. This option can be specified using --ptp-role with the host-if-modify and host-if-add commands. The ptp_role parameter accepts values of master, slave and none. The master and slave roles are limited to platform, SRIOV and VF interfaces. Any number of master and slave interfaces can be specified per host. If a host has clock_synchronization=ptp there must be at least one host interface with a PTP role specified. This is enforced during host unlock. Note in order to use UDP for PTP transport, each PTP interface must have an IP assigned. This is enforced during host unlock and when switching PTP transport to UDP. 
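As a hedged illustration of the prerequisites described above (the commands follow the usual sysinv CLI patterns, but the exact clock_synchronization and address syntax is an assumption and the values are illustrative; they are not part of this change's own examples below):

system host-update compute-3 clock_synchronization=ptp
system host-addr-add compute-3 sriovptp 192.168.100.3 24   # an IP is required on the PTP interface when using UDP transport
system ptp-modify --transport udp

As noted above, these constraints are then checked when the host is unlocked.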
Example CLI commands/output: system host-if-add compute-0 ptpif ae eth1000 eth1001 -c platform --ptp-role master system host-if-modify compute-3 ens803f0 -n sriovptp --ptp-role slave system host-if-show compute-3 sriovptp +-----------------+--------------------------------------+ | Property | Value | +-----------------+--------------------------------------+ | ifname | sriovptp | | iftype | ethernet | | ports | [u'ens803f0'] | | imac | 90:e2:ba:ac:70:00 | | imtu | 1500 | | ifclass | pci-sriov | | ptp_role | slave | | aemode | None | | schedpolicy | None | | txhashpolicy | None | | uuid | a7e2558c-6f58-4764-bda6-becbb82ac890 | | ihost_uuid | 4a05a1d4-e0c3-4b9c-970c-21391bdf2462 | | vlan_id | None | | uses | [] | | used_by | [] | | created_at | | | updated_at | | | sriov_numvfs | 8 | | sriov_vf_driver | netdevice | | accelerated | [True] | +-----------------+--------------------------------------+ Reviews: https://review.opendev.org/#/c/696910/ https://review.opendev.org/#/c/696913/ Story: https://storyboard.openstack.org/#!/story/2006759 -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Tue Dec 10 23:05:37 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 10 Dec 2019 23:05:37 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191210 Message-ID: Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-10 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Tue Dec 10 23:33:01 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 10 Dec 2019 23:33:01 +0000 Subject: [Starlingx-discuss] [Final Regression testing - stx3.0 ] Report for 12/10/2019 Message-ID: StarlingX 3.0 Release Status: ISO: BUILD_ID=" 20191205T023000Z" from (link) ---------------------------------------------------------------------- MANUAL FINAL EXECUTION ---------------------------------------------------------------------- Overall Results: Total = 69 Pass = 35 Fail = 1 Blocked = 6 Not Run = 27 Obsolete = 0 Deferred = 0 Total executed = 36 Progress = 61% Pass Rate = 97.2% Formula used : Pass Rate = (pass * 100) / (pass + fail) --------------------------------------------------------------------------- AUTOMATED ROBOT FINAL EXECUTION --------------------------------------------------------------------------- Overall Results: Total = 50 Pass = 47 Fail = 3 Blocked = 0 Obsolete = 0 Deferred = 0 Not Valid = 0 Total executed = 50 Progress = 100% Pass Rate = 100% Formula used : Pass Rate = pass * 100 / (pass + fail) --------------------------------------------------------------------------- AUTOMATED PYTEST FINAL EXECUTION --------------------------------------------------------------------------- Overall Results: Total = 43 Pass = 1 Fail = 0 Blocked = 0 Obsolete = 0 Deferred = 0 Not Valid = 0 Total executed = 1 Progress = 2% Pass Rate = 100% Formula used : Pass Rate = pass * 100 / (pass + fail) --------------------------------------------------------------------------- BUGS [ironic] pod ironic-manage-cleaning-network failing after helm override https://bugs.launchpad.net/starlingx/+bug/1855319 --------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1X085xI96M6PIeum87w6IEF11G-TG-W1OeGuAJWnrAWI/edit#gid=1717644237 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yi.c.wang at intel.com Wed Dec 11 00:34:31 2019 From: yi.c.wang at intel.com (Wang, Yi C) Date: Wed, 11 Dec 2019 00:34:31 +0000 Subject: [Starlingx-discuss] Stx-openstack application doesn't take effect on extra compute nodes. In-Reply-To: References: Message-ID: Are there any errors in your armada log in /var/log/armada? I ever encountered an issue. The system showed my application had been applied. But in fact, armada ran into some errors. From: parkeryan(闫志杰) Sent: Tuesday, December 10, 2019 7:43 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Stx-openstack application doesn't take effect on extra compute nodes. Dear all, After deploying StarlingX 2.0 with stx-openstack in 2+2 mode, I added some more compute nodes, and labeled them with system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled system host-label-assign $NODE sriov=enabled After unlocking these extra compute nodes, I found the stx-openstack application didn’t deploy on them, even if I reapplied the stx-openstack application. Was there some action I should take to trigger the deploying? 
Here is the detailed log: [sysadmin at controller-0 ~(keystone_admin)]$ system application-list +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ | application | version | manifest name | manifest file | status | progress | +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed | | stx-openstack | 1.0-17-centos-stable-latest | armada-manifest | stx-openstack.yaml | applied | completed | +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | unlocked | enabled | available | | 3 | compute-0 | worker | unlocked | enabled | available | | 4 | compute-1 | worker | unlocked | enabled | available | | 5 | compute-2 | worker | unlocked | enabled | available | | 6 | compute-3 | worker | unlocked | enabled | available | | 7 | compute-4 | worker | unlocked | enabled | available | | 8 | compute-5 | worker | unlocked | enabled | available | | 9 | compute-6 | worker | unlocked | enabled | available | | 10 | compute-7 | worker | unlocked | enabled | available | | 11 | compute-8 | worker | unlocked | enabled | available | | 12 | compute-9 | worker | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-0 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-0 | openstack-compute-node | enabled | | compute-0 | openvswitch | enabled | | compute-0 | sriov | enabled | | compute-0 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-1 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-1 | openstack-compute-node | enabled | | compute-1 | openvswitch | enabled | | compute-1 | sriov | enabled | | compute-1 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-2 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-2 | openstack-compute-node | enabled | | compute-2 | openvswitch | enabled | | compute-2 | sriov | enabled | | compute-2 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-3 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-3 | openstack-compute-node | enabled | | compute-3 | openvswitch | enabled | | compute-3 
| sriov | enabled | | compute-3 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-4 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-4 | openstack-compute-node | enabled | | compute-4 | openvswitch | enabled | | compute-4 | sriov | enabled | | compute-4 | sriovdp | enabled | +-----------+------------------------+-------------+ controller-0:~/sow$ openstack host list +-----------------------------------+-------------+----------+ | Host Name | Service | Zone | +-----------------------------------+-------------+----------+ | nova-consoleauth-67b4db556b-z6s6c | consoleauth | internal | | nova-consoleauth-67b4db556b-w7ztm | consoleauth | internal | | nova-conductor-76d979ff86-6l8ht | conductor | internal | | nova-scheduler-cd946798c-wkjpn | scheduler | internal | | nova-conductor-76d979ff86-l8s94 | conductor | internal | | nova-scheduler-cd946798c-vdmvb | scheduler | internal | | compute-1 | compute | nova | | compute-0 | compute | nova | +-----------------------------------+-------------+----------+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From parkeryan at tencent.com Wed Dec 11 01:44:24 2019 From: parkeryan at tencent.com (=?utf-8?B?cGFya2VyeWFuKOmXq+W/l+adsCk=?=) Date: Wed, 11 Dec 2019 01:44:24 +0000 Subject: [Starlingx-discuss] Stx-openstack application doesn't take effect on extra compute nodes.(Internet mail) In-Reply-To: References: Message-ID: I am sorry to disturb you, and I think maybe it is due to lacking of patience. After one night, it became okay, and each compute node can be identified by openstack application, and VMs can also be deployed, it works, I think. 
controller-0:~$ openstack host list +-----------------------------------+-------------+----------+ | Host Name | Service | Zone | +-----------------------------------+-------------+----------+ | nova-consoleauth-67b4db556b-z6s6c | consoleauth | internal | | nova-consoleauth-67b4db556b-w7ztm | consoleauth | internal | | nova-conductor-76d979ff86-6l8ht | conductor | internal | | nova-scheduler-cd946798c-wkjpn | scheduler | internal | | nova-conductor-76d979ff86-l8s94 | conductor | internal | | nova-scheduler-cd946798c-vdmvb | scheduler | internal | | compute-1 | compute | nova | | compute-0 | compute | nova | | compute-3 | compute | nova | | compute-2 | compute | nova | | compute-7 | compute | nova | | compute-5 | compute | nova | | compute-4 | compute | nova | | compute-9 | compute | nova | | compute-6 | compute | nova | | compute-8 | compute | nova | +-----------------------------------+-------------+----------+ controller-0:~$ openstack server list +--------------------------------------+----------------+--------+-----------------------------+--------+--------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+----------------+--------+-----------------------------+--------+--------+ | e817723f-2a87-41cd-b877-2245c921e6da | leo-plane2-sw2 | ACTIVE | leo-mgmt-net=192.168.10.232 | leo_v2 | sim | | b17d9bca-34ae-4695-aa02-8972ba751774 | leo-plane2-sw1 | ACTIVE | leo-mgmt-net=192.168.10.240 | leo_v2 | sim | | 71a449e5-9bbd-42d8-96a4-5c8655d4680a | leo-plane1-sw2 | ACTIVE | leo-mgmt-net=192.168.10.98 | leo_v2 | sim | | 76d15983-8f91-4fe1-b45e-2551cd67089d | leo-plane1-sw1 | ACTIVE | leo-mgmt-net=192.168.10.178 | leo_v2 | sim | | 185bb3e9-c86c-4022-b752-4fbf03a816e6 | leo-pod2-tor4 | ACTIVE | leo-mgmt-net=192.168.10.238 | leo_v2 | sim | | ba54f234-2f7d-4fd5-bec5-6fe98dcbf934 | leo-pod2-tor3 | ACTIVE | leo-mgmt-net=192.168.10.63 | leo_v2 | sim | | ab8c7f87-4bca-47f7-ba55-99761f49f379 | leo-pod2-tor2 | ACTIVE | leo-mgmt-net=192.168.10.135 | leo_v2 | sim | | f1db761e-48d3-4727-93b1-2bdaf802fd49 | leo-pod2-tor1 | ACTIVE | leo-mgmt-net=192.168.10.16 | leo_v2 | sim | | 6ad274b4-fd33-45e4-ad62-d7e402b23e8a | leo-pod2-fab2 | ACTIVE | leo-mgmt-net=192.168.10.220 | leo_v2 | sim | | e5d34e60-930c-4ca2-b48f-5c1069e001f6 | leo-pod2-fab1 | ACTIVE | leo-mgmt-net=192.168.10.162 | leo_v2 | sim | | 1b40b14e-fd9a-4b51-8f5f-4f97c2f3d5e9 | leo-pod1-tor4 | ACTIVE | leo-mgmt-net=192.168.10.208 | leo_v2 | sim | | dedaf9d8-1064-4637-929a-235f272846fe | leo-pod1-tor3 | ACTIVE | leo-mgmt-net=192.168.10.247 | leo_v2 | sim | | c1279e60-3abd-46dc-9084-7f93673f4e30 | leo-pod1-tor2 | ACTIVE | leo-mgmt-net=192.168.10.105 | leo_v2 | sim | | ed14a773-496c-4b03-a900-c54e44596fcd | leo-pod1-tor1 | ACTIVE | leo-mgmt-net=192.168.10.221 | leo_v2 | sim | | 45b08f00-974a-4bc3-b7ba-f0d954660da2 | leo-pod1-fab2 | ACTIVE | leo-mgmt-net=192.168.10.139 | leo_v2 | sim | | dbc4873f-f946-441b-bf22-7c93a2052f41 | leo-pod1-fab1 | ACTIVE | leo-mgmt-net=192.168.10.43 | leo_v2 | sim | +--------------------------------------+----------------+--------+-----------------------------+--------+--------+ Sorry again, and thanks for your reply. 发件人: "Wang, Yi C" 日期: 2019年12月11日 星期三 08:34 收件人: "parkeryan(闫志杰)" , "starlingx-discuss at lists.starlingx.io" 主题: RE: [Starlingx-discuss] Stx-openstack application doesn't take effect on extra compute nodes.(Internet mail) Are there any errors in your armada log in /var/log/armada? I ever encountered an issue. The system showed my application had been applied. 
But in fact, armada ran into some errors. From: parkeryan(闫志杰) Sent: Tuesday, December 10, 2019 7:43 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Stx-openstack application doesn't take effect on extra compute nodes. Dear all, After deploying StarlingX 2.0 with stx-openstack in 2+2 mode, I added some more compute nodes, and labeled them with system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openvswitch=enabled system host-label-assign $NODE sriov=enabled After unlocking these extra compute nodes, I found the stx-openstack application didn’t deploy on them, even if I reapplied the stx-openstack application. Was there some action I should take to trigger the deploying? Here is the detailed log: [sysadmin at controller-0 ~(keystone_admin)]$ system application-list +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ | application | version | manifest name | manifest file | status | progress | +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ | platform-integ-apps | 1.0-7 | platform-integration-manifest | manifest.yaml | applied | completed | | stx-openstack | 1.0-17-centos-stable-latest | armada-manifest | stx-openstack.yaml | applied | completed | +---------------------+-----------------------------+-------------------------------+--------------------+---------+-----------+ [sysadmin at controller-0 ~(keystone_admin)]$ [sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | unlocked | enabled | available | | 3 | compute-0 | worker | unlocked | enabled | available | | 4 | compute-1 | worker | unlocked | enabled | available | | 5 | compute-2 | worker | unlocked | enabled | available | | 6 | compute-3 | worker | unlocked | enabled | available | | 7 | compute-4 | worker | unlocked | enabled | available | | 8 | compute-5 | worker | unlocked | enabled | available | | 9 | compute-6 | worker | unlocked | enabled | available | | 10 | compute-7 | worker | unlocked | enabled | available | | 11 | compute-8 | worker | unlocked | enabled | available | | 12 | compute-9 | worker | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-0 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-0 | openstack-compute-node | enabled | | compute-0 | openvswitch | enabled | | compute-0 | sriov | enabled | | compute-0 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-1 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-1 | openstack-compute-node | enabled | | compute-1 | openvswitch | enabled | | compute-1 | sriov | enabled | | compute-1 | sriovdp | enabled | +-----------+------------------------+-------------+ 
[sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-2 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-2 | openstack-compute-node | enabled | | compute-2 | openvswitch | enabled | | compute-2 | sriov | enabled | | compute-2 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-3 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-3 | openstack-compute-node | enabled | | compute-3 | openvswitch | enabled | | compute-3 | sriov | enabled | | compute-3 | sriovdp | enabled | +-----------+------------------------+-------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-label-list compute-4 +-----------+------------------------+-------------+ | hostname | label key | label value | +-----------+------------------------+-------------+ | compute-4 | openstack-compute-node | enabled | | compute-4 | openvswitch | enabled | | compute-4 | sriov | enabled | | compute-4 | sriovdp | enabled | +-----------+------------------------+-------------+ controller-0:~/sow$ openstack host list +-----------------------------------+-------------+----------+ | Host Name | Service | Zone | +-----------------------------------+-------------+----------+ | nova-consoleauth-67b4db556b-z6s6c | consoleauth | internal | | nova-consoleauth-67b4db556b-w7ztm | consoleauth | internal | | nova-conductor-76d979ff86-6l8ht | conductor | internal | | nova-scheduler-cd946798c-wkjpn | scheduler | internal | | nova-conductor-76d979ff86-l8s94 | conductor | internal | | nova-scheduler-cd946798c-vdmvb | scheduler | internal | | compute-1 | compute | nova | | compute-0 | compute | nova | +-----------------------------------+-------------+----------+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Wed Dec 11 07:11:18 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Wed, 11 Dec 2019 07:11:18 +0000 Subject: [Starlingx-discuss] layered building Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628CEB3@CDSMSX102.ccr.corp.intel.com> Hi Scott & sault Have you test out building each layer, as sync on last week building meeting? I could patch again for Saul's yesterday comment. https://etherpad.openstack.org/p/stx-build And what about the cenga update progress? There is thill 3 patch with no progress. https://review.opendev.org/#/c/681821/ https://review.opendev.org/#/c/688598/ https://review.opendev.org/#/c/695010/ BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Wed Dec 11 08:46:29 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Wed, 11 Dec 2019 08:46:29 +0000 Subject: [Starlingx-discuss] ceph ops enabling in sysinv-conductor In-Reply-To: <82EBE26D-C9F9-4425-B5AE-9FEF1E74BC65@windriver.com> References: <56829C2A36C2E542B0CCB9854828E4D85628A993@CDSMSX102.ccr.corp.intel.com> <56829C2A36C2E542B0CCB9854828E4D85628AC31@CDSMSX102.ccr.corp.intel.com> <82EBE26D-C9F9-4425-B5AE-9FEF1E74BC65@windriver.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628CF76@CDSMSX102.ccr.corp.intel.com> Hi Bob Some question, what’s storage tier and storage profile? 
As you said, we no longer manage pool and pg num, is this also unnecessary and we should remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z Cc: 'starlingx-discuss at lists.starlingx.io' ; Poncea, Ovidiu Subject: Re: ceph ops enabling in sysinv-conductor See inline… From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. * I think it’s potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don’t think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. 
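As a side note, the sizing heuristic such a calculation could follow is the common rule of thumb of roughly 100 placement groups per OSD, divided by the pool's replication factor and rounded down to a power of two. A minimal shell sketch (illustrative only, with assumed values; this is not the actual sysinv implementation):

# rough PG sizing heuristic; adjust OSDS/REPLICAS to the cluster
OSDS=6
REPLICAS=2
TARGET=$(( OSDS * 100 / REPLICAS ))
PG_NUM=1
while [ $(( PG_NUM * 2 )) -le "$TARGET" ]; do PG_NUM=$(( PG_NUM * 2 )); done
echo "$PG_NUM"   # 256 for 6 OSDs with 2-way replication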
This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Wed Dec 11 09:09:08 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Wed, 11 Dec 2019 03:09:08 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <3cea8d$79pvh9@fmsmga002.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="r/stx.3.0" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Bill.Zvonar at windriver.com Wed Dec 11 12:30:27 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 11 Dec 2019 12:30:27 +0000 Subject: [Starlingx-discuss] Community Call (Dec 11, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007B3C666@ALA-MBD.corp.ad.wrs.com> Hi all, reminder of the Community call coming up later today. Feel free to add new topics to the agenda at [0]. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20191211T1500 From scott.little at windriver.com Wed Dec 11 14:50:48 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 11 Dec 2019 09:50:48 -0500 Subject: [Starlingx-discuss] layered building In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85628CEB3@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628CEB3@CDSMSX102.ccr.corp.intel.com> Message-ID: <70fea93e-6c72-31a4-56b2-5387a131f1ca@windriver.com> I have been able to give it some time in the last few days. I have a few modifications that I hope to test today.  If all goes well, a manual test using cengn tomorrow.  If that goes well, I'll post my updates, and start work on the cengn jenkins jobs Friday. Scott On 2019-12-11 2:11 a.m., Chen, Haochuan Z wrote: > > Hi Scott & sault > > Have you test out building each layer, as sync on last week building > meeting? I could patch again for Saul’s yesterday comment. > > https://etherpad.openstack.org/p/stx-build > > And what about the cenga update progress? > > There is thill 3 patch with no progress. > > https://review.opendev.org/#/c/681821/ > > https://review.opendev.org/#/c/688598/ > > https://review.opendev.org/#/c/695010/ > > BR! > > Martin, Chen > > SSP, Software Engineer > > 021-61164330 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Wed Dec 11 15:22:25 2019 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 11 Dec 2019 15:22:25 +0000 Subject: [Starlingx-discuss] How to modify management subnet_pool? In-Reply-To: <386BC956-9B56-44B5-9906-33AD1322F463@tencent.com> References: <386BC956-9B56-44B5-9906-33AD1322F463@tencent.com> Message-ID: Hi Parker: As command prompt , prefix of mgmt subnet could not be modified after cluster is setup. Thanks. BR Austin Sun. From: parkeryan(闫志杰) Sent: Wednesday, December 4, 2019 5:16 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How to modify management subnet_pool? Hi, I have a problem while deploying StarlingX 2.0, and I found the management subnet-pool was not enough for context. Here is the log. 
[sysadmin at controller-1 ~(keystone_admin)]$ system host-update 12 personality=worker hostname=compute-7 Remote error: AddressPoolExhausted Address pool management has no available addresses [u'Traceback (most recent call last):\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 438, in _process_data\n **args)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1643, in configure_ihost\n self._configure_worker_host(context, host)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1505, in _configure_worker_host\n self._allocate_addresses_for_host(context, host)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1100, in _allocate_addresses_for_host\n address_name).address\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1055, in _allocate_pool_address\n interface_id, pool_uuid, address_name, dbapi=self.dbapi\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/api/controllers/v1/address_pool.py", line 416, in assign_address\n ip_address = cls.allocate_address(pool, dbapi)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/api/controllers/v1/address_pool.py", line 399, in allocate_address\n raise exception.AddressPoolExhausted(name=pool.name)\n', u'AddressPoolExhausted: Address pool management has no available addresses\n']. And I have to extend the management address pool. [sysadmin at controller-0 sow(keystone_admin)]$ system addrpool-show d6ed7b27-4037-42db-97c7-676256b1c883 +---------------------+--------------------------------------+ | Property | Value | +---------------------+--------------------------------------+ | uuid | d6ed7b27-4037-42db-97c7-676256b1c883 | | name | management | | network | 192.168.204.0 | | prefix | 28 | | order | random | | ranges | ['192.168.204.2-192.168.204.14'] | | floating_address | 192.168.204.2 | | controller0_address | 192.168.204.3 | | controller1_address | 192.168.204.4 | | gateway_address | None | +---------------------+--------------------------------------+ I am trying to extend the ranges by ‘system addrpool-modify’, but it reminds the prefix can only be modified during bootstrap phase, [sysadmin at controller-0 sow(keystone_admin)]$ system help addrpool-modify usage: system addrpool-modify [--name ] [--ranges ] [--order ] [--prefix ] Modify interface attributes. Positional arguments: UUID of IP address pool entry Optional arguments: --name Name of the Address Pool] --ranges The inclusive range of addresses to allocate [,,...] --order The allocation order within the start/end range --prefix CIDR prefix, only modifiable during bootstrap phase. Anybody can tell me how to make the controller node and the compute node into bootstrap phase or I should redeploy the whole context from start, thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin.sun at intel.com Wed Dec 11 16:02:47 2019 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 11 Dec 2019 16:02:47 +0000 Subject: [Starlingx-discuss] MoM Agenda: Weekly StarlingX non-OpenStack distro meeting, 12/11/2019 Message-ID: Hi All: Thanks join the call , the MoM for 12/11 meeting: * stx.4.0 feature - Standardize Flock Package Versioning (JITStack-Daniels) ansible-playbook package patch is uploaded , currently 16 repos should be update. In general , 1 repo task / 1 week. https://review.opendev.org/#/c/698005/ under review. should keep tis patch version. Daniels's Team will fix it. - Kata Container (Shuicheng) enable POST method token fetch in registry token server , then the WA of containd was removed. shared patches list to Frank, EB will be tested. Shuicheng/GDC have already run sanity test. - CentOS 8.0 upgrade planning (Shuai Zhao) SRPM, 4 left tarball : 41 is compiled , left should be diffcult and more time and effort , 38 is not started. build-iso script , repoquery --whatprovides command does not support the second paremeters. Container Build : need continue improving commit message. * stx 3.0 bugs fix - CVE issue tracking (Shuicheng) * OVMF patches are merged stx.3.0 and master, stx.2.0 is depends on PR review. * kernel change upgrade 1062 performance concern about spectre patch , will be discussed on security meeting. - Storage issue tracking (Tingjie) 4-Medium https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.tags_combinator=ALL&field.tag=stx.3.0+stx.storage LP#1826886 cinder cmd not working intermittently ----Ma zheng incomplete state , need to check if reproduce recently LP#1844164 alarm 800.001 raised on lock storage-0 and not cleared when storage-0 unlocks --- Martin Patch is uploaded, under review. LP#1847336 IPv6 Distributed Cloud: ansible-playbook 'Wipe ceph osds' does not support re-play / re-entrance ---- Ovidiu verification can not reproduce in latest . LP#1848198 Glance backend present on non-openstack deployment ---- Stefan Dinescu Patch is uploaded - Others issue tracking (Austin) https://bugs.launchpad.net/starlingx/+bug/1847335 , need to find BIOS info the issue reported. Thanks. BR Austin Sun. From scott.little at windriver.com Wed Dec 11 16:17:28 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 11 Dec 2019 11:17:28 -0500 Subject: [Starlingx-discuss] Branch r/stx.3.0 is frozen Message-ID: Please treat branch r/stx.3.0 as frozen. No further commits should be merged into the r/stx.3.0 branch until further notice. A build of the 3.0.0 candidate load has been launched on CENGN. Thanks for your co-operation. Scott Little From Bill.Zvonar at windriver.com Wed Dec 11 16:43:26 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 11 Dec 2019 16:43:26 +0000 Subject: [Starlingx-discuss] Community Call (Dec 11, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007B3CA3A@ALA-MBD.corp.ad.wrs.com> >From today's meeting... 
Standing Topics - Gerrit Reviews in Need of Attention - GPU device plug in for stx.3.0 https://review.opendev.org/#/c/691699/ got only 1 of +2 --- desired but not critical for 3.0 -- can potentially go in stx.3.0 for a maintenance release https://review.opendev.org/#/c/696241/ document --- it's doc, so can go in master - Call for review on patches: - https://review.opendev.org/#/c/696035/1 --- changed priority to High so it'll get cherry-picked - https://review.opendev.org/#/c/688320/3 --- change to High as well - https://review.opendev.org/#/c/692276/ --- not critical for today, but should be cherry-picked once it's merged (to 3.0 and 2.0) - For CVE Fix https://github.com/starlingx-staging/stx-nova/pull/27 --- this will be in the 2.0 mtc release (not applicable to 3.0) - CentOS-8 reviews - sgw --- just a PSA to keep reviewing - Sanity: any RED since last week? - none - Unanswered Requests for Help on Mailing List - Tx Timestamp issue (Philip Wang): http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007173.html - Bill to ask Alex Kozyrev to weigh in on this - Management Subnet Pool (Parker Yan): http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007176.html - Austin responded - Baremetal Installation Error (Teshan Senaratne): http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007190.html - Yong will ask the storage team to look into this - SDA & NVME (Erich): http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007192.html - Yong will ask the storage guys about this too - Configuring Openstack SSL Certificates (Austin Gillmann): http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007212.html - Yong will look into this This Week's Topics - stx.3.0 Status - Test Status - Sanity is green on r/stx.3.0 loads - Regression: https://docs.google.com/spreadsheets/d/1X085xI96M6PIeum87w6IEF11G-TG-W1OeGuAJWnrAWI/edit#gid=1717644237 - at 95% - Bugs - Total: 35 - https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.3.0 - Critical/High: 14 - https://bugs.launchpad.net/starlingx/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=stx.3.0+&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on - These will be left as gating to be resolved in an upcoming maintenance release - Medium: 21 - 
https://bugs.launchpad.net/starlingx/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.importance%3Alist=MEDIUM&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=stx.3.0&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search - Request each PL to review and decide if any should be changed to High priority so that they are cherrypicked in a subsequent maintenance release - AR: PLs Target Finishing this scrub by the next community call - Dec 18 - After Dec 18, Ghada will update the Medium priority bugs as follows: - Medium >= 100 days that are not reproduced recently will be marked as Low priority / no target release - Medium < 100 days will be moved to stx.4.0 - Candidate Final Compile planned for today - Commits to wait for before requesting branch freeze for tagging and build: - https://review.opendev.org/#/q/topic:bug/1855915+(status:open+OR+status:merged) >> in master and needs to be cherrypicked - Will trigger a build once the above commits are merged in r/stx.3.0, including a docker image build - Release 2.0.2 bug status: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.2.0 --- for next week - User Docs Process - goal is to make the process of contribution as easy as possible for the developers - contribute the way that works for you - email, review, whatever - they'll sort out where the documentation goes, regardless of how it's provided - Ildiko commented that this isn't scalable - we agreed that we'll refine as we go to make things more scalable - Kristal concurred with Ildiko about making this part of the development process - IRC - why aren't we using it much? - meetbot? - Slack users guide to IRC clients? - depends on which client you use - Ildiko uses IRC Cloud - CENGN hosted and/or built QCOW images? - ran out of time, we'll discuss this offline -----Original Message----- From: Zvonar, Bill Sent: Wednesday, December 11, 2019 7:30 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (Dec 11, 2019) Hi all, reminder of the Community call coming up later today. Feel free to add new topics to the agenda at [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20191211T1500 From dtroyer at gmail.com Wed Dec 11 17:14:54 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 11 Dec 2019 11:14:54 -0600 Subject: [Starlingx-discuss] Community Call (Dec 11, 2019) In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007B3CA3A@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007B3CA3A@ALA-MBD.corp.ad.wrs.com> Message-ID: <5cc49c7b-52a8-034d-506c-c80a074f0718@gmail.com> On 12/11/19 10:43 AM, Zvonar, Bill wrote: > - IRC > - why aren't we using it much? > - meetbot? > - Slack users guide to IRC clients? 
> - depends on which client you use - Ildiko uses IRC Cloud A quick follow-up on this: * OpenStack has a guide to setting up IRC in their Contributor's Guide[0] and a short overview of IRC use[1] that includes a couple of links to resources like the IRC logs[2] and conventions. Note that all OpenDev (née OpenStack Infra) managed channels require registered nicknames[3] to join. * IRCCloud[4] is the web-based client Ildiko mentioned. Some of us use Pidgin on Linux and/or Adium on OS/X for graphical clients. There are many other clients, many people stick to pure text-mode and use weechat or irssi in a screen or tmux session as mentioned in the doc above. * OpenDev operates MeetBot[5] in a number of channels (#openstack-meeting*) to assist in summarizing meetings in the logs. If we think it would be useful to have in #starlingx that can be arranged. dt [0] https://docs.openstack.org/contributors/common/irc.html [1] https://docs.openstack.org/infra/manual/irc.html, additional IRC operations are documented in https://docs.openstack.org/infra/system-config/irc.html [2] http://eavesdrop.openstack.org/ [3] https://freenode.net/kb/answer/registration [4] https://www.irccloud.com/ [5] https://wiki.debian.org/MeetBot -- Dean Troyer dtroyer at gmail.com From scott.little at windriver.com Wed Dec 11 17:39:31 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 11 Dec 2019 12:39:31 -0500 Subject: [Starlingx-discuss] Branch r/stx.3.0 is frozen In-Reply-To: References: Message-ID: <41aceaa0-9a20-eb18-a5bc-a3d1d0654144@windriver.com> ETA for the r/3.0 build is 6 pm EST.     11 pm UTC. Scott On 2019-12-11 11:17 a.m., Scott Little wrote: > Please treat branch r/stx.3.0 as frozen. > > No further commits should be merged into the r/stx.3.0 branch until > further notice. > > A build of the 3.0.0 candidate load has been launched on CENGN. > > Thanks for your co-operation. > > Scott Little > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Dec 11 17:40:46 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 11 Dec 2019 09:40:46 -0800 Subject: [Starlingx-discuss] Centos8 Feature branches getting updated Message-ID: Folks, Just a heads up, I will be sending reviews to merge in the latest master to the feature branches inorder to keep them up todate. This may require some of the pending changes to be rebased. This will have more of affect in the root and tools repo. Sau! From scott.little at windriver.com Wed Dec 11 17:43:43 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 11 Dec 2019 12:43:43 -0500 Subject: [Starlingx-discuss] Branch r/stx.3.0 is frozen In-Reply-To: <41aceaa0-9a20-eb18-a5bc-a3d1d0654144@windriver.com> References: <41aceaa0-9a20-eb18-a5bc-a3d1d0654144@windriver.com> Message-ID: <1f0eb4f2-9d39-adfc-61c9-f454b96fe8e6@windriver.com> Revised ETA for the r/3.0 build is 6:45 pm EST.     11:45 pm UTC. 
Scott From cristopher.j.lemus.contreras at intel.com Wed Dec 11 18:06:04 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Wed, 11 Dec 2019 12:06:04 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <0eeb65$7slobr@FMSMGA003.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191211T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Ghada.Khalil at windriver.com Wed Dec 11 23:47:57 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 11 Dec 2019 23:47:57 +0000 Subject: [Starlingx-discuss] stx.3.0 Bugs -- PL Follow-up Actions Message-ID: <151EE31B9FCCA54397A757BC674650F0C1603D50@ALA-MBD.corp.ad.wrs.com> Hello StarlingX PLs/TLs, As per the community meeting today (12/11), please complete the following follow-up action for the next community call (12/18). Critical/High Bugs - https://bugs.launchpad.net/starlingx/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=stx.3.0+&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on - These will be left as gating to be resolved in an upcoming 3.0 maintenance release Medium Bugs - 
https://bugs.launchpad.net/starlingx/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.importance%3Alist=MEDIUM&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=stx.3.0&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search - AR: PLs to review and decide if any should be changed to High priority so that they are cherry-picked in a subsequent 3.0 maintenance release (only High bugs are targeted for cherrypicking) o PLs should go ahead and make the priority change, adding a note with the reasoning in the LP. - AR Target Date: Next community call - Dec 18 - After Dec 18, Ghada will update the Medium priority bugs as follows: o Medium >= 100 days that are not reproduced recently will be marked as Low priority / no target release o Medium < 100 days will be moved to stx.4.0 Thanks, Ghada -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Wed Dec 11 23:50:21 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 11 Dec 2019 23:50:21 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191211 Message-ID: Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-11 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Wed Dec 11 23:55:14 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 11 Dec 2019 23:55:14 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191211 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-11 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 07 TCs [BLOCKED] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== BUG: App platform-integ-apps failed to apply https://bugs.launchpad.net/starlingx/+bug/1856078 regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Dec 12 00:04:14 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 12 Dec 2019 00:04:14 +0000 Subject: [Starlingx-discuss] Help wanted: stx.2.0 cherry-picks Message-ID: <151EE31B9FCCA54397A757BC674650F0C1603DC0@ALA-MBD.corp.ad.wrs.com> Hello, Are you new to the StarlingX community and would like to take on a simple task to get familiar with the workflow for merging code? If so, I am looking for help with porting/cherry-picking a number of CVEs from master to the r/stx.2.0 branch. Note: Some re-work maybe required as the repo's have been re-organized in master. Please reach out to me if you would like to take on any of these. I am targeting a time-line up to the middle of January, but would like to start asap. 
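For anyone picking one of these up, the usual Gerrit flow looks roughly like the following (a sketch only — it assumes git-review is configured, the commit SHA is a placeholder, and, as noted above, some rework may be needed where the repos were re-organized on master):

# in a clone of the affected repo
git fetch origin
git checkout -b stx20-cherry-pick origin/r/stx.2.0
git cherry-pick -x <sha-of-the-commit-on-master>   # resolve conflicts if any; keep the original Change-Id
git review r/stx.2.0                               # submit the change for review on the release branch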
Thanks,
Ghada

List of items:
- LP: https://bugs.launchpad.net/starlingx/+bug/1849197
  - https://review.opendev.org/#/c/695984/
  - https://review.opendev.org/#/c/695983/
- LP: https://bugs.launchpad.net/starlingx/+bug/1849195 & https://bugs.launchpad.net/starlingx/+bug/1849203
  - https://review.opendev.org/#/c/695775/
- LP: https://bugs.launchpad.net/starlingx/+bug/1849210
  - https://review.opendev.org/#/c/695742/
- LP: https://bugs.launchpad.net/starlingx/+bug/1849201
  - https://review.opendev.org/#/c/695741/
- LP: https://bugs.launchpad.net/starlingx/+bug/1849202
  - https://review.opendev.org/#/c/695740/
- LP: https://bugs.launchpad.net/starlingx/+bug/1849200
  - https://review.opendev.org/#/c/695579/
  - https://review.opendev.org/#/c/695582/
  - https://review.opendev.org/#/c/695560/

Thanks,
Ghada

From parkeryan at tencent.com Thu Dec 12 01:52:42 2019
From: parkeryan at tencent.com (parkeryan(闫志杰))
Date: Thu, 12 Dec 2019 01:52:42 +0000
Subject: [Starlingx-discuss] Reply: How to modify management subnet_pool?(Internet mail)
In-Reply-To: 
References: <386BC956-9B56-44B5-9906-33AD1322F463@tencent.com>
Message-ID: <412c444c1b8444f5a8f3a6619f743b77@tencent.com>

Hi Austin:

Yes, there is no way to modify the prefix of the mgmt subnet pool after unlocking controller-0, so I have redeployed the whole cluster. Thanks for your reply.

Best regards
parkeryan

From: Sun, Austin
Sent: Wednesday, December 11, 2019 23:22
To: parkeryan(闫志杰); starlingx-discuss at lists.starlingx.io
Subject: RE: How to modify management subnet_pool?(Internet mail)

Hi Parker:

As the command prompt says, the prefix of the mgmt subnet cannot be modified after the cluster is set up.

Thanks.
BR
Austin Sun.

From: parkeryan(闫志杰)
Sent: Wednesday, December 4, 2019 5:16 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] How to modify management subnet_pool?

Hi,

I have a problem while deploying StarlingX 2.0: the management subnet pool turned out to be too small for the cluster. Here is the log.
[sysadmin at controller-1 ~(keystone_admin)]$ system host-update 12 personality=worker hostname=compute-7 Remote error: AddressPoolExhausted Address pool management has no available addresses [u'Traceback (most recent call last):\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 438, in _process_data\n **args)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch\n result = getattr(proxyobj, method)(ctxt, **kwargs)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1643, in configure_ihost\n self._configure_worker_host(context, host)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1505, in _configure_worker_host\n self._allocate_addresses_for_host(context, host)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1100, in _allocate_addresses_for_host\n address_name).address\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1055, in _allocate_pool_address\n interface_id, pool_uuid, address_name, dbapi=self.dbapi\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/api/controllers/v1/address_pool.py", line 416, in assign_address\n ip_address = cls.allocate_address(pool, dbapi)\n', u' File "/usr/lib64/python2.7/site-packages/sysinv/api/controllers/v1/address_pool.py", line 399, in allocate_address\n raise exception.AddressPoolExhausted(name=pool.name)\n', u'AddressPoolExhausted: Address pool management has no available addresses\n']. And I have to extend the management address pool. [sysadmin at controller-0 sow(keystone_admin)]$ system addrpool-show d6ed7b27-4037-42db-97c7-676256b1c883 +---------------------+--------------------------------------+ | Property | Value | +---------------------+--------------------------------------+ | uuid | d6ed7b27-4037-42db-97c7-676256b1c883 | | name | management | | network | 192.168.204.0 | | prefix | 28 | | order | random | | ranges | ['192.168.204.2-192.168.204.14'] | | floating_address | 192.168.204.2 | | controller0_address | 192.168.204.3 | | controller1_address | 192.168.204.4 | | gateway_address | None | +---------------------+--------------------------------------+ I am trying to extend the ranges by ‘system addrpool-modify’, but it reminds the prefix can only be modified during bootstrap phase, [sysadmin at controller-0 sow(keystone_admin)]$ system help addrpool-modify usage: system addrpool-modify [--name ] [--ranges ] [--order ] [--prefix ] Modify interface attributes. Positional arguments: UUID of IP address pool entry Optional arguments: --name Name of the Address Pool] --ranges The inclusive range of addresses to allocate [,,...] --order The allocation order within the start/end range --prefix CIDR prefix, only modifiable during bootstrap phase. Anybody can tell me how to make the controller node and the compute node into bootstrap phase or I should redeploy the whole context from start, thanks in advance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuyong at neusoft.com Thu Dec 12 07:17:18 2019 From: fuyong at neusoft.com (fuyong) Date: Thu, 12 Dec 2019 15:17:18 +0800 Subject: [Starlingx-discuss] Problems build-iso in centos8 environment Message-ID: <000001d5b0bc$309ddd10$91d99730$@neusoft.com> Hi StarlingX team I’m upgrading starlingx to centos8 and have a question to ask for help. 
problem: When I run the build-iso command in the centos8 build container environment, I find that the build-tools / build_iso / cgts_deps.sh script runs with some issues. 1. I cannot add multiple parameters after the “repoquery –whatprovides” command. 2. Unable to query some dependencies through “repoquery –whatprovides” command. Eg1: In the centos7 build container environment, the following commands can be executed normally. You can see that multiple parameters can be added after repoquery –whatprovides Centos7: [lymtics at ba0d929a1011 /]$ repoquery -c /localdisk/loadbuild/lymtics/starlingx/export/yum.conf --repoid=TisCentos7Distro --arch=x86_64,noarch --whatprovides 'libattr.so.1()(64bit)' 'libc.so.6()(64bit)' 'libc.so.6(GLIBC_2.14)(64bit)' 'libc.so.6(GLIBC_2.2.5)(64bit)' 'libc.so.6(GLIBC_2.3.4)(64bit)' 'libc.so.6(GLIBC_2.3)(64bit)' 'libc.so.6(GLIBC_2.4)(64bit)' 'rtld(GNU_HASH)' '--qf=%{name}' libattr glibc glibc glibc glibc glibc glibc glibc But it is invalid in the centos8 build container environment. centos8: [lymtics at aa4a04b2f1f7 export]$ repoquery -c /localdisk/loadbuild/lymtics/starlingx/export/yum.conf --repoid=TisCentos8Distro --arch=x86_64,noarch --whatprovides 'libattr.so.1()(64bit)' 'libc.so.6()(64bit)' 'libc.so.6(GLIBC_2.14)(64bit)' 'libc.so.6(GLIBC_2.2.5)(64bit)' 'libc.so.6(GLIBC_2.3.4)(64bit)' 'libc.so.6(GLIBC_2.3)(64bit)' 'libc.so.6(GLIBC_2.4)(64bit)' 'rtld(GNU_HASH)' '--qf=%{name}' Last metadata expiration check: 0:09:25 ago on Wed 11 Dec 2019 03:09:45 AM UTC. If I query one parameter at a time, I can query the results correctly in centos8. Eg2: In the centos7 build container environment, the following commands can be executed normally. [lymtics at ba0d929a1011 export]$ repoquery -c /localdisk/loadbuild/lymtics/starlingx/export/yum.conf --repoid=TisCentos7Distro --arch=x86_64,noarch --whatprovides libacl = 2.2.51-14.el7 '--qf=%{name}' libacl But it is invalid in the centos8 build container environment. centos8: [lymtics at 07e4fd4ae0c8 export]$ repoquery -c /localdisk/loadbuild/lymtics/starlingx/export/yum.conf --repoid=TisCentos8Distro --arch=x86_64,noarch --whatprovides libacl = 2.2.53-1.el8 '--qf=%{name}' Last metadata expiration check: 0:03:40 ago on Wed 11 Dec 2019 03:41:17 AM UTC. please contact me, If you have any good suggestions. Thank you Best Regards Wish you happy everyday! -------------------------------- Yong.Fu- Neusoft --------------------------------------------------------------------------------------------------- Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful.If you have received this communication in error,please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you. --------------------------------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From cristopher.j.lemus.contreras at intel.com Thu Dec 12 09:02:18 2019
From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com)
Date: Thu, 12 Dec 2019 03:02:18 -0600
Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images
Message-ID: <029d15$66bpr8@orsmga008.jf.intel.com>

List of docker images required for "platform-integ-apps":

BUILD_ID="r/stx.3.0"

rabbitmq:3.7-management
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2
quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic
quay.io/calico/node:v3.6.2
quay.io/calico/cni:v3.6.2
quay.io/calico/kube-controllers:v3.6.2
rabbitmq:3.7.13-management
rabbitmq:3.7.13
gcr.io/kubernetes-helm/tiller:v2.13.1
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
openstackhelm/mariadb:10.2.18
quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11
quay.io/stackanetes/kubernetes-entrypoint:v0.3.1
mariadb:10.2.13
memcached:1.5.5
k8s.gcr.io/pause:3.1
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
nginx:1.13.3
gcr.io/google_containers/defaultbackend:1.0

From Matt.Peters at windriver.com Thu Dec 12 13:34:26 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Thu, 12 Dec 2019 13:34:26 +0000
Subject: [Starlingx-discuss] data-network down
In-Reply-To: 
References: <74C7C868-5BA2-4373-BE3F-D0ADE2BC18EF@windriver.com>
Message-ID: 

Hi,
You can examine the logs from the OVS container.

kubectl -n openstack get pods -o wide | grep openvswitch-vswitchd
kubectl -n openstack logs <pod-name>

From: "von Hoesslin, Volker"
Date: Tuesday, December 10, 2019 at 2:54 AM
To: "Peters, Matt", "'starlingx-discuss at lists.starlingx.io'"
Subject: AW: [Starlingx-discuss] data-network down

Hi,
my OVS is configured as recommended in the installation guide: system vswitch_type=none. The LLDP discovery currently shows the expected behavior; I will recheck the next time the error occurs.
Btw, where can I see the OVS error logs? I'm not sure which of these billions of logs I should focus on for OVS errors…
Volker.

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Monday, December 9, 2019 19:38
To: von Hoesslin, Volker; 'starlingx-discuss at lists.starlingx.io'
Subject: Re: [Starlingx-discuss] data-network down

Hello,
Are you running with containerized OVS (system vswitch_type=none) or OVS-DPDK on the host (system vswitch_type=ovs-dpdk)?
In either case, I would examine the logs from OVS to see if there are any errors being generated for the host that is exhibiting the failure.
In addition, you can check your infrastructure data network configuration to ensure it is set up as you expected by reviewing the LLDP information for the host (system host-lldp-neighbor-list) or the graphical display on the dashboard.
-Matt

From: "von Hoesslin, Volker"
Date: Monday, December 9, 2019 at 4:34 AM
To: "'starlingx-discuss at lists.starlingx.io'"
Subject: [Starlingx-discuss] data-network down

Hi,
my STX 2.0 cluster is up and running (2x controller, 2x worker, 3x storage), but one of my workers has a problem that I cannot fix. After some time (I do not know exactly when; in most cases overnight) the data network of all VMs on that worker goes down and no connection over the data network is available any more. The other worker is running fine without any problems.
If I live-migrate the VMs from one worker to the other and back again, the data network comes up and everything is fine again. I cannot find any logs that explain this problem. Any suggestions or hints on what to look for?
greez & thx, volker.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Ian.Jolliffe at windriver.com Thu Dec 12 14:59:23 2019
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Thu, 12 Dec 2019 14:59:23 +0000
Subject: [Starlingx-discuss] [TSC] Minutes of 12/5 TSC meeting
Message-ID: <1EB340B0-8593-451C-9735-FE1CDFA13E47@windriver.com>

Hi all;

Here are a few notes from the TSC meeting last week:

Project confirmation - next steps - move to a standing topic

Test in the open - raise visibility at community call
  Unit, Functional, ...
  Increase coverage and culture of unit test delivery with code
  Make it required - update for new code bug fixes - need a unit test as well
  - Leverage Pytest framework more and do more in the open with virtual environments vs bare metal

Community involvement - capture where we are and where we want to go
  Contributor diversity
  User growth and adoption - gather some stats
  bitergia - to see where we are at - Ian to gather some data
  Leverage WeChat readout from China community
  Summary going out weekly to ML
  Documentation improvements, contributor guides, development process - increase visibility to help user and dev adoption

Project mission/vision
  Just a check point to ensure we are all still aligned
  https://etherpad.openstack.org/p/stx-mission-statement

StarlingX listed as supporting community for 5th ETSI NFV Plugtest (ildikov)
  https://www.etsi.org/events/1550-nfv-plugtests-4
  https://www.etsi.org/technologies/nfv/nfv-plugtests-programme
  NFV and Edge - can we get community members to attend/participate
  Can we contribute to the test suite as well?
  Are there community members planning to attend this event?

Regards;

Ian

From build.starlingx at gmail.com Thu Dec 12 16:04:21 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 12 Dec 2019 11:04:21 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_3.0 - Build # 18 - Failure!
Message-ID: <8192514.449.1576166662352.JavaMail.javamailuser@localhost>

Project: STX_BUILD_3.0
Build #: 18
Status: Failure
Timestamp: 20191212T031052Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191212T031052Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From build.starlingx at gmail.com Thu Dec 12 16:29:47 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 12 Dec 2019 11:29:47 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_3.0 - Build # 19 - Still Failing!
In-Reply-To: <787889716.447.1576166659147.JavaMail.javamailuser@localhost> References: <787889716.447.1576166659147.JavaMail.javamailuser@localhost> Message-ID: <2043190246.452.1576168188155.JavaMail.javamailuser@localhost> Project: STX_BUILD_3.0 Build #: 19 Status: Still Failing Timestamp: 20191212T160527Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191212T160527Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true From scott.little at windriver.com Thu Dec 12 16:30:53 2019 From: scott.little at windriver.com (Scott Little) Date: Thu, 12 Dec 2019 11:30:53 -0500 Subject: [Starlingx-discuss] Branch r/stx.3.0 is frozen In-Reply-To: <1f0eb4f2-9d39-adfc-61c9-f454b96fe8e6@windriver.com> References: <41aceaa0-9a20-eb18-a5bc-a3d1d0654144@windriver.com> <1f0eb4f2-9d39-adfc-61c9-f454b96fe8e6@windriver.com> Message-ID: <8dcb0328-5239-d173-cb00-69a828e365e8@windriver.com> The r/3.0 branch build hung yesterday. I have relaunched it, and will monitor it more closely today. Scott On 2019-12-11 12:43 p.m., Scott Little wrote: > Revised ETA for the r/3.0 build is 6:45 pm EST.     11:45 pm UTC. > > Scott > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Thu Dec 12 17:21:42 2019 From: scott.little at windriver.com (Scott Little) Date: Thu, 12 Dec 2019 12:21:42 -0500 Subject: [Starlingx-discuss] Branch r/stx.3.0 is frozen In-Reply-To: <8dcb0328-5239-d173-cb00-69a828e365e8@windriver.com> References: <41aceaa0-9a20-eb18-a5bc-a3d1d0654144@windriver.com> <1f0eb4f2-9d39-adfc-61c9-f454b96fe8e6@windriver.com> <8dcb0328-5239-d173-cb00-69a828e365e8@windriver.com> Message-ID: <7a9fcaa4-1efa-6840-a2db-e465d019e3f5@windriver.com> We have passed the point of the prior hang. ETA is 7:30 EST,      12:30 am UTC Scott On 2019-12-12 11:30 a.m., Scott Little wrote: > The r/3.0 branch build hung yesterday. > > I have relaunched it, and will monitor it more closely today. > > Scott > > > On 2019-12-11 12:43 p.m., Scott Little wrote: >> Revised ETA for the r/3.0 build is 6:45 pm EST.     11:45 pm UTC. >> >> Scott >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Thu Dec 12 17:30:03 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 12 Dec 2019 17:30:03 +0000 Subject: [Starlingx-discuss] Weekly StarlingX Release meeting Message-ID: <151EE31B9FCCA54397A757BC674650F0C1604246@ALA-MBD.corp.ad.wrs.com> ** Reducing to 30mins since we'll just focus on any final items for stx.3.0 Weekly meeting on Thursday 11AM PT / 1900 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: text/calendar Size: 1828 bytes Desc: not available URL: From Ian.Jolliffe at windriver.com Thu Dec 12 21:51:21 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Thu, 12 Dec 2019 21:51:21 +0000 Subject: [Starlingx-discuss] [TSC] Minutes from 12/12 TSC meeting Message-ID: <143E35E5-327C-4C9F-9424-E03420CC2A38@windriver.com> Hi all; 12/12/2019: Great progress on R3 - thanks to all the great work going on in the community OSF annual report - input needed https://etherpad.openstack.org/p/2019_StarlingX_Annual_Update Project confirmation: Story outline: Timeline chart - Major inflection points and highlights Maybe dashboard on criteria - see Shanghai update Supporting info for the criteria to back up our assessment Info on contributors Info on test initiatives leverage Shanghai chart deck. Virtual hack-a-thon - for test? - Currently proposed for week of Jan 13th. ======================== Can we have a virtual hack-a-thon to improve test on StarlingX from Unit test to System testing Tools we can or could leverage: - Potentially some video link ups - to make it feel like people are in the same room - Video Zoom sessions One channel open all the time - special IRC focus Shark week concept - pick something - and do a demo at end of the week could turn into new initiatives a week is probably needed - or a contiguous block of time for people to focus on these items Can we do something like this in early January? A good way to build some momentum, remove barriers and kick-start. Figure a way for various time zone to follow and stay in synch. Needs some planning to start it up - people need to pick topics, to avoid collisions and overlap Figure out good times to synch up as required as well. Get some input on the mailing list from the community. Get some dates - NA/China holidays - potential dates - are the first 2 weeks of January possible? Proposed dates the week of *January 13th*. Please reply if you are interested in participating in this *hack-a-thon*. This will help us assess interest and figure out how to plan this activity. Let's try and lock in on this by next week's TSC call. Regards; Ian From sgw at linux.intel.com Thu Dec 12 22:27:39 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 12 Dec 2019 14:27:39 -0800 Subject: [Starlingx-discuss] Problems build-iso in centos8 environment In-Reply-To: <000001d5b0bc$309ddd10$91d99730$@neusoft.com> References: <000001d5b0bc$309ddd10$91d99730$@neusoft.com> Message-ID: <903f57f8-1354-c22d-7119-4a177db650c5@linux.intel.com> On 12/11/19 11:17 PM, fuyong wrote: > *Hi StarlingX team* > > I’m upgrading starlingx to centos8 and have a question to ask for help. > > *problem:* > > *When I run the build-iso command in the centos8 build container > environment, I find that the build-tools / build_iso / cgts_deps.sh > script runs with some issues.* > > 1.I cannot add multiple parameters after the “repoquery –whatprovides” > command. > > 2.Unable to query some dependencies through “repoquery –whatprovides” > command. > Hi Fuyong: First off, as I have mentioned before DNF replaced YUM and associated commands, which repoquery is one of. It's likely that the DNF variation is not exactly the same, so your seeing this change in behavior. Depending on the usage in the build tools scripts, we might have to account for that change and modify the scripts. Sau! > *Eg1:* > > *In the centos7 build container environment, the following commands can > be executed normally. 
You can see that multiple parameters can be added > after repoquery –whatprovides* > > *Centos7:* > > [lymtics at ba0d929a1011 /]$ repoquery -c > /localdisk/loadbuild/lymtics/starlingx/export/yum.conf > --repoid=TisCentos7Distro --arch=x86_64,noarch --whatprovides > 'libattr.so.1()(64bit)' 'libc.so.6()(64bit)' > 'libc.so.6(GLIBC_2.14)(64bit)' 'libc.so.6(GLIBC_2.2.5)(64bit)' > 'libc.so.6(GLIBC_2.3.4)(64bit)' 'libc.so.6(GLIBC_2.3)(64bit)' > 'libc.so.6(GLIBC_2.4)(64bit)' 'rtld(GNU_HASH)' '--qf=%{name}' > > libattr > > glibc > > glibc > > glibc > > glibc > > glibc > > glibc > > glibc > > *But it is invalid in the centos8 build container environment.* > > *centos8:* > > [lymtics at aa4a04b2f1f7 export]$ repoquery -c > /localdisk/loadbuild/lymtics/starlingx/export/yum.conf > --repoid=TisCentos8Distro --arch=x86_64,noarch --whatprovides > 'libattr.so.1()(64bit)' 'libc.so.6()(64bit)' > 'libc.so.6(GLIBC_2.14)(64bit)' 'libc.so.6(GLIBC_2.2.5)(64bit)' > 'libc.so.6(GLIBC_2.3.4)(64bit)' 'libc.so.6(GLIBC_2.3)(64bit)' > 'libc.so.6(GLIBC_2.4)(64bit)' 'rtld(GNU_HASH)' '--qf=%{name}' > > Last metadata expiration check: 0:09:25 ago on Wed 11 Dec 2019 03:09:45 > AM UTC. > > *If I query one parameter at a time, I can query the results correctly > in centos8.* > > ** > > *Eg2:* > > *In the centos7 build container environment, the following commands can > be executed normally.* > > [lymtics at ba0d929a1011 export]$ repoquery -c > /localdisk/loadbuild/lymtics/starlingx/export/yum.conf > --repoid=TisCentos7Distro --arch=x86_64,noarch --whatprovides *libacl = > 2.2.51-14.el7* '--qf=%{name}' > libacl** > > ** > > *But it is invalid in the centos8 build container environment.* > > *centos8:* > > [lymtics at 07e4fd4ae0c8 export]$ repoquery -c > /localdisk/loadbuild/lymtics/starlingx/export/yum.conf > --repoid=TisCentos8Distro --arch=x86_64,noarch --whatprovides *libacl = > 2.2.53-1.el8* '--qf=%{name}' > Last metadata expiration check: 0:03:40 ago on Wed 11 Dec 2019 03:41:17 > AM UTC. > > ** > > ** > > please contact me, If you have any good suggestions. > > Thank you > > ** > > Best Regards > > Wish you happy everyday! > > -------------------------------- > > Yong.Fu- Neusoft > > > --------------------------------------------------------------------------------------------------- > Confidentiality Notice: The information contained in this e-mail and any > accompanying attachment(s) > is intended only for the use of the intended recipient and may be > confidential and/or privileged of > Neusoft Corporation, its subsidiaries and/or its affiliates. If any > reader of this communication is > not the intended recipient, unauthorized use, forwarding, printing, > storing, disclosure or copying > is strictly prohibited, and may be unlawful.If you have received this > communication in error,please > immediately notify the sender by return e-mail, and delete the original > message and all copies from > your system. Thank you. > --------------------------------------------------------------------------------------------------- > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From bruce.e.jones at intel.com Fri Dec 13 00:40:49 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 13 Dec 2019 00:40:49 +0000 Subject: [Starlingx-discuss] customer request - backup and restore for VMs? 
Message-ID: <9A85D2917C58154C960D95352B22818BED38CDAC@fmsmsx123.amr.corp.intel.com> I am working on responding to a customer requirements list. What are the current capabilities for backup and restore for VM (OpenStack) guests in StarlingX? Is it available? If so, does it use standard Ceph / Cinder commands, or are there StarlingX commands for it? Thank you! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From vivian.zhu at intel.com Fri Dec 13 00:48:48 2019 From: vivian.zhu at intel.com (Zhu, Vivian) Date: Fri, 13 Dec 2019 00:48:48 +0000 Subject: [Starlingx-discuss] customer request - backup and restore for VMs? In-Reply-To: <9A85D2917C58154C960D95352B22818BED38CDAC@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BED38CDAC@fmsmsx123.amr.corp.intel.com> Message-ID: <371DF9A763E9F44F924F4A821FC070264DBA4DED@SHSMSX105.ccr.corp.intel.com> Bruce, OpenStack has the "Rebuild Instance" feature, it can rebuild the VM from snapshot. So the feature to allow to restore the cell system to a defined snapshot and backup is supported. Thanks! - Vivian SSP NST Storage Tel: (8621)61167437 From: Jones, Bruce E Sent: Friday, December 13, 2019 8:41 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] customer request - backup and restore for VMs? I am working on responding to a customer requirements list. What are the current capabilities for backup and restore for VM (OpenStack) guests in StarlingX? Is it available? If so, does it use standard Ceph / Cinder commands, or are there StarlingX commands for it? Thank you! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Dec 13 02:06:12 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 13 Dec 2019 02:06:12 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - Dec 12/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0C16045DC@ALA-MBD.corp.ad.wrs.com> Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases stx.3.0 - No build yet as per Scott's email. 
Current ETA is 7:30pm Eastern - Test team is waiting for the new build (monitored by Jenkins job) to start sanity - Once sanity is green, will mark the build as released and declare the release milestone - Need to keep an eye on the sanity issue that was reported in master: https://bugs.launchpad.net/starlingx/+bug/1856078 - Currently this was not seen on the rc 3.0 build - Will monitor to see if this appears in the next sanity on master or on the next 3.0 load Regards, Ghada From cristopher.j.lemus.contreras at intel.com Fri Dec 13 09:04:34 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Fri, 13 Dec 2019 03:04:34 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <029d15$66nont@orsmga008.jf.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="r/stx.3.0" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From scott.little at windriver.com Fri Dec 13 16:53:37 2019 From: scott.little at windriver.com (Scott Little) Date: Fri, 13 Dec 2019 11:53:37 -0500 Subject: [Starlingx-discuss] Branch r/stx.3.0 is frozen In-Reply-To: <7a9fcaa4-1efa-6840-a2db-e465d019e3f5@windriver.com> References: <41aceaa0-9a20-eb18-a5bc-a3d1d0654144@windriver.com> <1f0eb4f2-9d39-adfc-61c9-f454b96fe8e6@windriver.com> <8dcb0328-5239-d173-cb00-69a828e365e8@windriver.com> <7a9fcaa4-1efa-6840-a2db-e465d019e3f5@windriver.com> Message-ID: <14b9ae53-736b-b65c-e892-cf837276501a@windriver.com> The intended build was a success http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191212T162958Z I see that I failed the discontinue nightly builds, so we have one additional build timestamped 20191213T023000Z.  Happily it's changelog is empty, so there is no content difference if you used that one explicitly, or through the latest-build link. I have disabled the nightly build of r/3.0, and will return the master branch build to it's original time slot.  Further builds of r/3.0 will be by request only. I'll tag and publish 3.0.0 when we get a favorable result from testing. Scott On 2019-12-12 12:21 p.m., Scott Little wrote: > We have passed the point of the prior hang. > > ETA is 7:30 EST,      12:30 am UTC > > Scott > > > On 2019-12-12 11:30 a.m., Scott Little wrote: >> The r/3.0 branch build hung yesterday. >> >> I have relaunched it, and will monitor it more closely today. >> >> Scott >> >> >> On 2019-12-11 12:43 p.m., Scott Little wrote: >>> Revised ETA for the r/3.0 build is 6:45 pm EST.     11:45 pm UTC. 
>>> >>> Scott >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maria.g.perez.ibarra at intel.com Fri Dec 13 17:25:25 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 13 Dec 2019 17:25:25 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Message-ID: Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-13 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Fri Dec 13 17:32:54 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 13 Dec 2019 17:32:54 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BED38D333@fmsmsx123.amr.corp.intel.com> Are we good to go for 3.0? 
brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Friday, December 13, 2019 9:25 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-13 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Fri Dec 13 17:58:16 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Fri, 13 Dec 2019 11:58:16 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <3cea8d$7appl6@fmsmga002.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191213T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From elio.martinez.monroy at intel.com Fri Dec 13 18:12:13 2019 From: elio.martinez.monroy at intel.com (Martinez Monroy, Elio) Date: Fri, 13 Dec 2019 18:12:13 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 In-Reply-To: <9A85D2917C58154C960D95352B22818BED38D333@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BED38D333@fmsmsx123.amr.corp.intel.com> Message-ID: >From the testing perspective I think we are, but not sure if Scott needs to give some feedback. BR Elio From: Jones, Bruce E Sent: Friday, December 13, 2019 11:33 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Are we good to go for 3.0? 
brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Friday, December 13, 2019 9:25 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-13 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Fri Dec 13 19:51:48 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 13 Dec 2019 19:51:48 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 In-Reply-To: References: <9A85D2917C58154C960D95352B22818BED38D333@fmsmsx123.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BED38D40A@fmsmsx123.amr.corp.intel.com> The tests were run against http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191213T023000Z/outputs/iso/ Scott's build is http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191212T162958Z/ Elio tells me there is no content difference. Ghada, Bill - are we good to declare 3.0? brucej From: Martinez Monroy, Elio Sent: Friday, December 13, 2019 10:12 AM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 >From the testing perspective I think we are, but not sure if Scott needs to give some feedback. BR Elio From: Jones, Bruce E > Sent: Friday, December 13, 2019 11:33 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Are we good to go for 3.0? 
brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Friday, December 13, 2019 9:25 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-13 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex.Kozyrev at windriver.com Fri Dec 13 20:15:46 2019 From: Alex.Kozyrev at windriver.com (Kozyrev, Alexander (Alex)) Date: Fri, 13 Dec 2019 20:15:46 +0000 Subject: [Starlingx-discuss] PTP tx timestamp seem not get from NIC In-Reply-To: References: Message-ID: Hi Philip, you can capture PTP packets using tcpdump command with the "ether proto 0x88F7" parameter for L2 or "udp port 319 or udp port 320" in case of UDP. Wireshark is very helpful in understanding of PTP packets flow. Also, the pmc command is available for you to check the PTP port status and see what's going on. For example, "pmc -u -b 0 'GET PORT_DATA_SET" and pmc -u -b 0 'GET TIME_STATUS_NP' are the most useful ones in analyzing slave/master interactions. Regards, Alex From: Philip_Wang at alphanetworks.com [mailto:Philip_Wang at alphanetworks.com] Sent: December 3, 2019 21:16 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] PTP tx timestamp seem not get from NIC Dear Team, In my All-In-One environment, I check /var/log/user.log find below message. 2019-12-03T16:03:30.000 controller-0 ptp4l: warning [20954.517] clockcheck: clock jumped forward or running faster than expected! 2019-12-03T16:03:30.000 controller-0 ptp4l: notice [20954.517] port 1: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.613] port 0015b2.fffe.a92e24-1 changed state 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.613] reconfiguring after port state change 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.613] master clock not ready, waiting... 
2019-12-03T16:03:30.000 controller-0 ptp4l: notice [20954.861] port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.914] port 0015b2.fffe.a92e24-1 changed state 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.914] reconfiguring after port state change 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.914] selecting CLOCK_REALTIME for synchronization 2019-12-03T16:03:30.000 controller-0 phc2sys: info [20954.914] selecting enp0s25 as the master clock 2019-12-03T16:03:35.000 controller-0 phc2sys: info [20959.816] rms 8125465923430 max 70368744177703 freq -3070283 +/- 72810449 delay 1662 +/- 172 2019-12-03T16:03:43.000 controller-0 ptp4l: info [20967.182] rms 17864936252211 max 70368748587963 freq +17842027 +/- 101360440 delay 173334 +/- 1003918 2019-12-03T16:03:43.000 controller-0 ptp4l: warning [20967.688] clockcheck: clock jumped forward or running faster than expected! 2019-12-03T16:03:43.000 controller-0 ptp4l: notice [20967.688] port 1: SLAVE to UNCALIBRATED on SYNCHRONIZATION_FAULT 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20967.720] port 0015b2.fffe.a92e24-1 changed state 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20967.720] reconfiguring after port state change 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20967.720] master clock not ready, waiting... 2019-12-03T16:03:43.000 controller-0 ptp4l: notice [20968.031] port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20968.120] port 0015b2.fffe.a92e24-1 changed state 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20968.120] reconfiguring after port state change 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20968.120] selecting CLOCK_REALTIME for synchronization 2019-12-03T16:03:43.000 controller-0 phc2sys: info [20968.120] selecting enp0s25 as the master clock This message is seem that delay request packet TX timestamp is not get from NIC(Intel I210) cause port 1 state from SLAVE to UNCALIBRATED. Has any debug method can check the TX timestamp?? Best Reagrds, - Philip Wang --- Alpha Networks Inc. TEL: 886-3-5636666 EXT:6403 This electronic mail transmission is intended only for the named recipient. It contains information which may be privileged,confidential and exempt from disclosure under applicable law. Dissemination, distribution, or copying of this communication by anyone other than the recipient or the recipient's agent is strictly prohibited. If this electronic mail transmission is received in error, Please notify us immediately and delete the message and all attachments of it from your computer system. Thank you for your cooperation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Dec 13 22:16:56 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 13 Dec 2019 22:16:56 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 In-Reply-To: <9A85D2917C58154C960D95352B22818BED38D40A@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BED38D333@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BED38D40A@fmsmsx123.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C1604C0A@ALA-MBD.corp.ad.wrs.com> Hi Bruce, Scott still needs to label the branch and move the build to the release location in CENGN. We will use the most recent build (20191213T023000Z); the one used in sanity. The content is the same. 
I expect we can officially declare the release milestone on Monday. Regards, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, December 13, 2019 2:52 PM To: starlingx-discuss at lists.starlingx.io; Khalil, Ghada; Zvonar, Bill Cc: Martinez Monroy, Elio Subject: RE: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 The tests were run against http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191213T023000Z/outputs/iso/ Scott's build is http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191212T162958Z/ Elio tells me there is no content difference. Ghada, Bill - are we good to declare 3.0? brucej From: Martinez Monroy, Elio Sent: Friday, December 13, 2019 10:12 AM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 >From the testing perspective I think we are, but not sure if Scott needs to give some feedback. BR Elio From: Jones, Bruce E > Sent: Friday, December 13, 2019 11:33 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Are we good to go for 3.0? brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Friday, December 13, 2019 9:25 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-13 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Fri Dec 13 23:50:03 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 13 Dec 2019 23:50:03 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191213 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-13 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.somerville at windriver.com Fri Dec 13 23:54:05 2019 From: jim.somerville at windriver.com (Jim Somerville) Date: Fri, 13 Dec 2019 18:54:05 -0500 Subject: [Starlingx-discuss] Help wanted: stx.2.0 cherry-picks In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C1603DC0@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C1603DC0@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ghada, I've already submitted the 4 in the middle for review, will also do 1849197 and 1849200 next week. -Jim On 2019-12-11 7:04 p.m., Khalil, Ghada wrote: > Hello, > Are you new to the StarlingX community and would like to take on a simple task to get familiar with the workflow for merging code? > > If so, I am looking for help with porting/cherry-picking a number of CVEs from master to the r/stx.2.0 branch. > Note: Some re-work maybe required as the repo's have been re-organized in master. > > Please reach out to me if you would like to take on any of these. I am targeting a time-line up to the middle of January, but would like to start asap. 
> > Thanks, > Ghada > > List of items: > - LP: https://bugs.launchpad.net/starlingx/+bug/1849197 > - https://review.opendev.org/#/c/695984/ > - https://review.opendev.org/#/c/695983/ > - LP: https://bugs.launchpad.net/starlingx/+bug/1849195 & https://bugs.launchpad.net/starlingx/+bug/1849203 > - https://review.opendev.org/#/c/695775/ > - LP: https://bugs.launchpad.net/starlingx/+bug/1849210 > - https://review.opendev.org/#/c/695742/ > - LP: https://bugs.launchpad.net/starlingx/+bug/1849201 > - https://review.opendev.org/#/c/695741/ > - LP: https://bugs.launchpad.net/starlingx/+bug/1849202 > - https://review.opendev.org/#/c/695740/ > - LP: https://bugs.launchpad.net/starlingx/+bug/1849200 > - https://review.opendev.org/#/c/695579/ > - https://review.opendev.org/#/c/695582/ > - https://review.opendev.org/#/c/695560/ > > > Thanks, > Ghada > From Bill.Zvonar at windriver.com Mon Dec 16 00:12:32 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Mon, 16 Dec 2019 00:12:32 +0000 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C1604C0A@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BED38D333@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BED38D40A@fmsmsx123.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0C1604C0A@ALA-MBD.corp.ad.wrs.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007B3DD80@ALA-MBD.corp.ad.wrs.com> I'll be traveling Monday, so please assume my thumbs up once Ghada gives hers... From: Khalil, Ghada Sent: Friday, December 13, 2019 5:17 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io; Zvonar, Bill Cc: Martinez Monroy, Elio Subject: RE: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Hi Bruce, Scott still needs to label the branch and move the build to the release location in CENGN. We will use the most recent build (20191213T023000Z); the one used in sanity. The content is the same. I expect we can officially declare the release milestone on Monday. Regards, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, December 13, 2019 2:52 PM To: starlingx-discuss at lists.starlingx.io; Khalil, Ghada; Zvonar, Bill Cc: Martinez Monroy, Elio Subject: RE: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 The tests were run against http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191213T023000Z/outputs/iso/ Scott's build is http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191212T162958Z/ Elio tells me there is no content difference. Ghada, Bill - are we good to declare 3.0? brucej From: Martinez Monroy, Elio Sent: Friday, December 13, 2019 10:12 AM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 >From the testing perspective I think we are, but not sure if Scott needs to give some feedback. BR Elio From: Jones, Bruce E > Sent: Friday, December 13, 2019 11:33 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Are we good to go for 3.0? 
brucej From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Friday, December 13, 2019 9:25 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 Status of the Sanity Test for RC 3.0 CENGN ISO: bootimage.iso from 2019-December-13 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From minx.she at intel.com Mon Dec 16 05:23:40 2019 From: minx.she at intel.com (She, MinX) Date: Mon, 16 Dec 2019 05:23:40 +0000 Subject: [Starlingx-discuss] How to set up a distribute cloud On virtual environment? Message-ID: <3FD5865A6C125A4AA82439278554E00A0227477A@SHSMSX105.ccr.corp.intel.com> Hello: There is no document describing how to set up a distributed cloud in a virtual environment. After 3.0 comes out, will there be one? She min -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Mon Dec 16 05:44:30 2019 From: austin.sun at intel.com (Sun, Austin) Date: Mon, 16 Dec 2019 05:44:30 +0000 Subject: [Starlingx-discuss] Ceph Containerization feature discuss Message-ID: Hi All: There was a good discussion about the Ceph containerization tasks and schedule during the 10th China open source hackathon.
Open questions/risks:
1. Will the Ceph images be pulled directly from upstream or built from StarlingX?
2. Sysinv integration with Ceph.
3. Ceph feature branch? Branches needed: ha / stx-puppet / config / platform-armada-app / ansible-playbooks / integ / utilities
4. rook-client code repo, similar to ceph-client
5. How librados / rgw will be provided
6. Test cases / test strategy.
Schedule/Plan:
1) Python Rook plugin for sysinv and command - Tingjie / 2020-1-1
2) Bootstrap and Helm-Override mapping - Martin / 2020-1-4
3) CSI provisioner to replace the RBD provisioner for the sysinv KubeApp - Martin / 2020-1-20
4) Sysinv OpenStack interface support for provisioning (Cinder/Swift) - Tingjie / 2020-1-20
5) Integration / system commands / patch upload - ??? (wait for the 4th task to finish, then plan)
6) Old code cleanup
The detailed info was put in the etherpad [1], but some sections were changed to simplified Chinese characters.
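For reference, once a Rook-managed Ceph cluster is up, a quick health check is usually possible from the Rook toolbox pod. The commands below are only a rough sketch; the rook-ceph namespace, the app=rook-ceph-tools label, and the toolbox itself are Rook upstream defaults and are assumptions here, not decisions taken in this discussion:

# Confirm the Rook operator, monitor and OSD pods are running (default namespace assumed)
kubectl -n rook-ceph get pods
# Run Ceph health commands from the toolbox pod (label is the upstream default)
TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- ceph status
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- ceph osd tree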
Thanks Tingjie, Martin and Yong. [1] https://etherpad.openstack.org/p/OpenSource-Hackathon-Ceph-Containerization Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Mon Dec 16 09:11:01 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Mon, 16 Dec 2019 03:11:01 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <8c37da$6f93kk@orsmga003.jf.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191216T000000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From kristal.dale at intel.com Mon Dec 16 17:47:17 2019 From: kristal.dale at intel.com (Dale, Kristal) Date: Mon, 16 Dec 2019 17:47:17 +0000 Subject: [Starlingx-discuss] How to set up a distribute cloud On virtual environment? In-Reply-To: <3FD5865A6C125A4AA82439278554E00A0227477A@SHSMSX105.ccr.corp.intel.com> References: <3FD5865A6C125A4AA82439278554E00A0227477A@SHSMSX105.ccr.corp.intel.com> Message-ID: <43F963BD1517044B90CF82FB2F3CA71623469C0A@fmsmsx120.amr.corp.intel.com> Hi She min, Currently, we have an Install guide for setting up distributed cloud on bare metal (R3): https://docs.starlingx.io/deploy_install_guides/r3_release/distributed_cloud/index.html Currently, we do not have a specific plan for adding an install guide for distributed cloud in a virtual environment. However, it would be a great addition! I've added this topic to this week's docs team meeting agenda to discuss what the guide would look like. https://etherpad.openstack.org/p/stx-documentation Please join us if you have an interest in contributing to the docs or would like to join the discussion! We meet Wednesdays 12:30 PST. https://wiki.openstack.org/wiki/Starlingx/Meetings Cheers, Kris From: She, MinX Sent: Sunday, December 15, 2019 9:24 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] How to set up a distribute cloud On virtual environment? Hello: There is no document describing how to set up a distributed cloud in a virtual environment. After 3.0 comes out, will there be one? She min -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Dec 16 18:25:29 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 16 Dec 2019 18:25:29 +0000 Subject: [Starlingx-discuss] Community call this week Message-ID: <9A85D2917C58154C960D95352B22818BED38F6C2@fmsmsx123.amr.corp.intel.com> We will hold our community call this week as usual. Bill has asked me to host.
Please find the agenda at the usual place [1] and feel free to add any items you'd like to discuss. Brucej [1] https://etherpad.openstack.org/p/stx-status -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Dec 16 19:04:46 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 16 Dec 2019 14:04:46 -0500 Subject: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 In-Reply-To: <9A85D2917C58154C960D95352B22818BED38D40A@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BED38D333@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BED38D40A@fmsmsx123.amr.corp.intel.com> Message-ID: <64da8aa2-9e9b-c155-042b-f0f993bb6fba@windriver.com> StarlingX release 3.0.0 has been published in cengn, and tags pushed to git. http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0/centos/ Scott On 2019-12-13 2:51 p.m., Jones, Bruce E wrote: > > The tests were run against > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191213T023000Z/outputs/iso/ > > Scott’s build is > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191212T162958Z/ > > Elio tells me there is no content difference. > > Ghada, Bill – are we good to declare 3.0? > >          brucej > > *From:*Martinez Monroy, Elio > *Sent:* Friday, December 13, 2019 10:12 AM > *To:* Jones, Bruce E ; > starlingx-discuss at lists.starlingx.io > *Subject:* RE: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 > > From the testing perspective I think we are, but not sure if Scott > needs to give some feedback. > > BR > > Elio > > *From:*Jones, Bruce E > > *Sent:* Friday, December 13, 2019 11:33 AM > *To:* starlingx-discuss at lists.starlingx.io > > *Subject:* Re: [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 > > Are we good to go for 3.0? 
> >       brucej > > *From:*Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] > *Sent:* Friday, December 13, 2019 9:25 AM > *To:* starlingx-discuss at lists.starlingx.io > > *Subject:* [Starlingx-discuss] Sanity Test - RC 3.0 ISO - 20191213 > > *Status of the Sanity Test for RC 3.0 CENGN ISO:*/bootimage.iso from > 2019-December-13/(link > ) > > Status:*GREEN * > > =========================================== > > Sanity Test is executed in a/_Containers – Bare Metal Environment_/ > > *AIO –Simplex* > > ** > > Setup             04 TCs[PASS] > > Provisioning      01 TCs [PASS] > > Sanity OpenStack 49TCs [PASS] > > Sanity Platform   07TCs [PASS] > > TOTAL:[ 61 TCs ] > > *AIO –Duplex* > > ** > > Setup             04 TCs[PASS] > > Provisioning      01 TCs [PASS] > > Sanity OpenStack 52TCs [PASS] > > Sanity Platform   07TCs [PASS] > > TOTAL:[ 64 TCs ] > > *Standard-Local Storage (2+2)* > > Setup             04TCs [PASS] > > Provisioning      01 TCs [PASS] > > Sanity OpenStack 52TCs [PASS] > > Sanity Platform   08TCs [PASS] > > TOTAL:[ 65 TCs ] > > *Standard- DedicatedStorage (2+2+2)* > > Setup  04TCs [PASS] > > Provisioning      01 TCs [PASS] > > Sanity OpenStack 52TCs [PASS] > > Sanity Platform   09TCs [PASS] > > TOTAL:[ 66 TCs ] > > =========================================== > > Sanity Test is executed in a/_Containers – Virtual Environment_/ > > *AIO –Simplex* > > ** > > Setup             04 TCs[PASS] > > Provisioning      01 TCs [PASS] > > Sanity OpenStack 49TCs [PASS] > > Sanity Platform   07TCs [PASS] > > TOTAL:[ 61 TCs ] > > *AIO –Duplex* > > ** > > Setup             04 TCs[PASS] > > Provisioning      01 TCs [PASS] > > Sanity OpenStack 52TCs [PASS] > > Sanity Platform   07TCs [PASS] > > TOTAL:[ 64 TCs ] > > *Standard-Local Storage (2+2)* > > Setup             04TCs [PASS] > > Provisioning      01 TCs [PASS] > > Sanity OpenStack 52TCs [PASS] > > Sanity Platform   08TCs [PASS] > > TOTAL:[ 65 TCs ] > > *Standard- DedicatedStorage (2+2+2)* > > Setup  04TCs [PASS] > > Provisioning      01 TCs [PASS] > > Sanity OpenStack 52TCs [PASS] > > Sanity Platform   09TCs [PASS] > > TOTAL:[ 66 TCs ] > > ** > > regards > > Maria G > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Mon Dec 16 19:09:43 2019 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 16 Dec 2019 11:09:43 -0800 Subject: [Starlingx-discuss] CentOS 8 Feature work Message-ID: <30e40625-82c0-4db4-3b1c-8600664c659e@linux.intel.com> Folks, I think we are ready to start merging the centos8 work, please work to remove the -workflow so we can start tracking what can get merged better. Since almost all of your work has Depends-On, they won't merge until the dependent reviews merge. The Tools and Root changes need another +2 and +W as appropriate. I also think we need a manifest update with the f/centos8 branch defined. I know there are still some patches that need some deeper review and cleanup, but I think the core patches are ready. Sau! 
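Once that manifest change is in place, a workspace could be pointed at the feature branch in the usual repo-based way. This is only a sketch; the manifest repo URL, the default.xml manifest name, and the assumption that the manifest itself carries an f/centos8 branch come from the note above rather than from a published procedure:

# Initialize and sync a StarlingX source tree against the CentOS 8 feature branch (names assumed)
repo init -u https://opendev.org/starlingx/manifest -b f/centos8 -m default.xml
repo sync -j8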
From scott.little at windriver.com Mon Dec 16 19:12:51 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 16 Dec 2019 14:12:51 -0500 Subject: [Starlingx-discuss] Branch r/stx.3.0 is frozen In-Reply-To: <14b9ae53-736b-b65c-e892-cf837276501a@windriver.com> References: <41aceaa0-9a20-eb18-a5bc-a3d1d0654144@windriver.com> <1f0eb4f2-9d39-adfc-61c9-f454b96fe8e6@windriver.com> <8dcb0328-5239-d173-cb00-69a828e365e8@windriver.com> <7a9fcaa4-1efa-6840-a2db-e465d019e3f5@windriver.com> <14b9ae53-736b-b65c-e892-cf837276501a@windriver.com> Message-ID: The 3.0.0 release has been published on cengn. http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0/centos/ The r/3.0 branch is open for updates destined for 3.0.1 Scott On 2019-12-13 11:53 a.m., Scott Little wrote: > The intended build was a success > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/20191212T162958Z > > > I see that I failed the discontinue nightly builds, so we have one > additional build timestamped 20191213T023000Z.  Happily it's changelog > is empty, so there is no content difference if you used that one > explicitly, or through the latest-build link. > > I have disabled the nightly build of r/3.0, and will return the master > branch build to it's original time slot.  Further builds of r/3.0 will > be by request only. > > I'll tag and publish 3.0.0 when we get a favorable result from testing. > > Scott > > > > On 2019-12-12 12:21 p.m., Scott Little wrote: >> We have passed the point of the prior hang. >> >> ETA is 7:30 EST,      12:30 am UTC >> >> Scott >> >> >> On 2019-12-12 11:30 a.m., Scott Little wrote: >>> The r/3.0 branch build hung yesterday. >>> >>> I have relaunched it, and will monitor it more closely today. >>> >>> Scott >>> >>> >>> On 2019-12-11 12:43 p.m., Scott Little wrote: >>>> Revised ETA for the r/3.0 build is 6:45 pm EST.     11:45 pm UTC. >>>> >>>> Scott >>>> >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Mon Dec 16 19:35:46 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 16 Dec 2019 19:35:46 +0000 Subject: [Starlingx-discuss] Branch r/stx.3.0 is frozen In-Reply-To: References: <41aceaa0-9a20-eb18-a5bc-a3d1d0654144@windriver.com> <1f0eb4f2-9d39-adfc-61c9-f454b96fe8e6@windriver.com> <8dcb0328-5239-d173-cb00-69a828e365e8@windriver.com> <7a9fcaa4-1efa-6840-a2db-e465d019e3f5@windriver.com> <14b9ae53-736b-b65c-e892-cf837276501a@windriver.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0C160558A@ALA-MBD.corp.ad.wrs.com> Thanks Scott. Saul/Don, Can you please review/merge the commit that updates the manifest of the r/stx.3.0 branch? 
https://review.opendev.org/#/c/699252/ I think this needs to merge before we officially declare the stx.3.0 release and re-open the branch for submissions for the mtce release. Thanks, Ghada -----Original Message----- From: Scott Little [mailto:scott.little at windriver.com] Sent: Monday, December 16, 2019 2:13 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Branch r/stx.3.0 is frozen The 3.0.0 release has been published on cengn. http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0/centos/ The r/3.0 branch is open for updates destined for 3.0.1 Scott On 2019-12-13 11:53 a.m., Scott Little wrote: > The intended build was a success > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/3.0/centos/201912 > 12T162958Z > > > I see that I failed the discontinue nightly builds, so we have one > additional build timestamped 20191213T023000Z.  Happily it's changelog > is empty, so there is no content difference if you used that one > explicitly, or through the latest-build link. > > I have disabled the nightly build of r/3.0, and will return the master > branch build to it's original time slot.  Further builds of r/3.0 will > be by request only. > > I'll tag and publish 3.0.0 when we get a favorable result from testing. > > Scott > > > > On 2019-12-12 12:21 p.m., Scott Little wrote: >> We have passed the point of the prior hang. >> >> ETA is 7:30 EST,      12:30 am UTC >> >> Scott >> >> >> On 2019-12-12 11:30 a.m., Scott Little wrote: >>> The r/3.0 branch build hung yesterday. >>> >>> I have relaunched it, and will monitor it more closely today. >>> >>> Scott >>> >>> >>> On 2019-12-11 12:43 p.m., Scott Little wrote: >>>> Revised ETA for the r/3.0 build is 6:45 pm EST.     11:45 pm UTC. >>>> >>>> Scott >>>> >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus >>>> s >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Mon Dec 16 23:15:29 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 16 Dec 2019 23:15:29 +0000 Subject: [Starlingx-discuss] stx.3.0 Release milestone declared Message-ID: <151EE31B9FCCA54397A757BC674650F0C160583E@ALA-MBD.corp.ad.wrs.com> Hello all, This email announces that the stx.3.0 Release milestone has been achieved as of Dec 16, 2019. StarlingX release 3.0 is officially delivered! It is available on CENGN at: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0 Release Notes at: https://docs.starlingx.io/releasenotes/r3_release.html This release delivers 15 new features and 182 bug fixes to StarlingX. See the release notes for a full list of features. 
Thank you to everyone in the Community - from development, test and documentation - for all of your hard work in delivering this release! Regards, Ghada On behalf of the StarlingX Release team From Ian.Jolliffe at windriver.com Mon Dec 16 23:35:13 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Mon, 16 Dec 2019 23:35:13 +0000 Subject: [Starlingx-discuss] stx.3.0 Release milestone declared In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C160583E@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C160583E@ALA-MBD.corp.ad.wrs.com> Message-ID: Congratulations to the whole community on a fantastic release! > On Dec 16, 2019, at 5:16 PM, Khalil, Ghada wrote: > > Hello all, > This email announces that the stx.3.0 Release milestone has been achieved as of Dec 16, 2019. StarlingX release 3.0 is officially delivered! > > It is available on CENGN at: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0 > Release Notes at: https://docs.starlingx.io/releasenotes/r3_release.html > > This release delivers 15 new features and 182 bug fixes to StarlingX. See the release notes for a full list of features. > > Thank you to everyone in the Community - from development, test and documentation - for all of your hard work in delivering this release! > > > Regards, > Ghada > On behalf of the StarlingX Release team > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From david.a.cobbley at intel.com Mon Dec 16 23:38:27 2019 From: david.a.cobbley at intel.com (Cobbley, David A) Date: Mon, 16 Dec 2019 23:38:27 +0000 Subject: [Starlingx-discuss] stx.3.0 Release milestone declared In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C160583E@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C160583E@ALA-MBD.corp.ad.wrs.com> Message-ID: An amazing accomplishment by all involved. Two major releases (and a minor release) completed in the last four months of the year! Thanks for the strong efforts, dedication and persistence. -----Original Message----- From: Khalil, Ghada Sent: Monday, December 16, 2019 3:15 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx.3.0 Release milestone declared Hello all, This email announces that the stx.3.0 Release milestone has been achieved as of Dec 16, 2019. StarlingX release 3.0 is officially delivered! It is available on CENGN at: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0 Release Notes at: https://docs.starlingx.io/releasenotes/r3_release.html This release delivers 15 new features and 182 bug fixes to StarlingX. See the release notes for a full list of features. Thank you to everyone in the Community - from development, test and documentation - for all of your hard work in delivering this release! 
Regards, Ghada On behalf of the StarlingX Release team _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maria.g.perez.ibarra at intel.com Mon Dec 16 23:45:03 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 16 Dec 2019 23:45:03 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191216 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-16 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Dec 17 00:21:51 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 17 Dec 2019 00:21:51 +0000 Subject: [Starlingx-discuss] stx.3.0 Release milestone declared In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0C160583E@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F361018B2@SHSMSX104.ccr.corp.intel.com> Great to see that we hit the stx3.0 release milestone before holidays!!! Cindy Xie IAGS -----Original Message----- From: Jolliffe, Ian Sent: Tuesday, December 17, 2019 7:35 AM To: Khalil, Ghada Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx.3.0 Release milestone declared Congratulations to the whole community on a fantastic release! > On Dec 16, 2019, at 5:16 PM, Khalil, Ghada wrote: > > Hello all, > This email announces that the stx.3.0 Release milestone has been achieved as of Dec 16, 2019. StarlingX release 3.0 is officially delivered! > > It is available on CENGN at: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0 > Release Notes at: https://docs.starlingx.io/releasenotes/r3_release.html > > This release delivers 15 new features and 182 bug fixes to StarlingX. See the release notes for a full list of features. > > Thank you to everyone in the Community - from development, test and documentation - for all of your hard work in delivering this release! 
> > Regards, > Ghada > On behalf of the StarlingX Release team > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From vivian.zhu at intel.com Tue Dec 17 02:40:19 2019 From: vivian.zhu at intel.com (Zhu, Vivian) Date: Tue, 17 Dec 2019 02:40:19 +0000 Subject: [Starlingx-discuss] Ceph Containerization feature discuss In-Reply-To: References: Message-ID: <371DF9A763E9F44F924F4A821FC070264DBA933D@SHSMSX105.ccr.corp.intel.com> Thanks Austin for writing down the clear steps and schedule plan for the Ceph containerization feature targeting stx.4.0. - Vivian SSP NST Storage Tel: (8621)61167437 From: Sun, Austin Sent: Monday, December 16, 2019 1:45 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] Ceph Containerization feature discuss Hi All: There was a good discussion about the Ceph Containerization tasks and schedule during the 10th China open source hackathon. Open questions/risks: 1. Will Ceph images be pulled directly from upstream or built from stx? 2. Sysinv integration with Ceph. 3. Ceph feature branch? Needed branches: ha / stx-puppet / config / platform-armada-app / ansible-playbooks / integ / utilities 4. rook-client code repo, like ceph-client 5. How librados / rgw will be provided 6. Test cases / test strategy. Schedule/Plan: 1) Python Rook Plugin for Sysinv and Command - Tingjie / 2020-1-1 2) Bootstrap and Helm-Override Mapping - Martin / 2020-1-4 3) CSI Provider Replace Rbd Provider for Sysinv KubeApp - Martin / 2020-1-20 4) SysInv Openstack Interface support provisioning (Cinder/Swift) - Tingjie / 2020-1-20 5) Integration/System command/Patch upload - ??? Wait for the 4th task to finish, then plan 6) Old code cleanup The detailed info was put in the etherpad [1], but some sections were changed to simplified Chinese characters. Thanks Tingjie, Martin and Yong. [1] https://etherpad.openstack.org/p/OpenSource-Hackathon-Ceph-Containerization Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Tue Dec 17 07:05:55 2019 From: yong.hu at intel.com (Yong Hu) Date: Tue, 17 Dec 2019 15:05:55 +0800 Subject: [Starlingx-discuss] [stx.distro.openstack] WW51 project meeting - https://zoom.us/j/342730236 In-Reply-To: References: Message-ID: <4a54d3ad-6db4-7868-edd2-2b715b635f02@intel.com> Hi folks, Today we (the stx.distro.openstack team) are going to have the project meeting at 10:00 PM China time or 6:00 AM US PST. Welcome to join us! Here is the agenda for WW51: 1. stx.3.0: the release was declared on Monday. THANKS to all of the contributors who made this happen! 2. stx.3.0 maintenance release outlook: - we need to resolve all LPs with HIGH priority that are tagged with stx.3.0. See the details in [1]. - Once a fix is available on the master branch, we need to cherry-pick it to the r/stx.3.0 branch. 3. stx.4.0 planning and update. 4.
A quick sharing about the "China OpenSource project the 10th Hackathon" held in Beijing on Dec 15th and 16th. [0] distro.openstack etherpad: https://etherpad.openstack.org/p/stx-distro-openstack-meetings [1] distro.openstack Launchpad: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.openstack [2] zoom bridge for meeting: https://zoom.us/j/342730236 Regards, Yong From austin.sun at intel.com Tue Dec 17 08:16:44 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 17 Dec 2019 08:16:44 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 12/18/2019 Message-ID: Agenda for 12/18 meeting: * stx.3.0 release was declared on 16th Dec. Congratulations and thanks, team. If any high bug is tagged stx.3.0, please fix it asap and cherry-pick the fix to the stx.3.0 branch * stx.4.0 features - Ceph Containerization (Tingjie/Martin) Open questions / plan: https://etherpad.openstack.org/p/OpenSource-Hackathon-Ceph-Containerization - Standardize Flock Package Versioning (JITStack-Daniels) - Kata Container (Shuicheng) - CentOS 8.0 upgrade planning (Shuai Zhao) * stx.3.0 bug fixes: there is no high bug for the non-distro project; is there any medium bug we want to promote to high and then fix for stx.3.0? * Open: cancel the Dec 25th/Jan 1st meetings because of Christmas and New Year. Update the agenda if there are other topics to be discussed: https://etherpad.openstack.org/p/stx-distro-other Thanks. BR Austin Sun. From haochuan.z.chen at intel.com Tue Dec 17 08:46:47 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 17 Dec 2019 08:46:47 +0000 Subject: [Starlingx-discuss] question about ceph or storage Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628DE96@CDSMSX102.ccr.corp.intel.com> Hi Bob & Ovidiu, Some questions about ceph or storage. 1, What's a storage tier and a storage profile? What's the 2, Why does duplex require this puppet class dependency in ceph.pp? Does it force all drbd config to happen before class ceph? Drbd::Resource <| |> -> Class['::ceph'] And is the flag file ".node_ceph_configured" there to make drbd do its initial setup before the ceph config? 3, Why does launching ceph-mon create a logical volume "ceph-mon-lv" and mount it to /var/lib/ceph/mon, instead of directly running mkdir "/var/lib/ceph/mon" for ceph-mon? Thanks! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, December 11, 2019 4:46 PM To: Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' ; Poncea, Ovidiu ; Qi, Mingyuan Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob Some questions: what's a storage tier and a storage profile? As you said, we no longer manage pool and pg num, so is this also unnecessary and should we remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor See inline… From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update the default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps is applied, the user may not have added OSDs yet, so the default pg num is still only a reference for the user. And for user overrides, I think it should be added in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. But a pg num update will trigger a rebalance, which could jam the management network.
[RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. * I think it’s potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don’t think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. 
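For reference, the sizing being discussed here roughly follows the usual Ceph guideline of about 100 PGs per OSD, divided by the pool replication factor and rounded down to a power of two. A minimal shell sketch of that arithmetic is below; the replication factor of 2, the 100-PGs-per-OSD target and the pool name are illustrative assumptions only, not the exact logic in sysinv/helm/rbd_provisioner.py:

  # sketch only: derive a pg_num suggestion from the number of provisioned OSDs
  OSDS=$(ceph osd ls | wc -l)          # OSDs currently known to the cluster
  REPLICAS=2                           # assumed pool replication factor
  TARGET=$(( OSDS * 100 / REPLICAS ))  # ~100 PGs per OSD guideline
  PG_NUM=1
  while [ $(( PG_NUM * 2 )) -le "$TARGET" ]; do PG_NUM=$(( PG_NUM * 2 )); done
  echo "suggested pg_num for a single pool across ${OSDS} OSDs: ${PG_NUM}"
  # pg_num can only be increased on Mimic; applying it would look like:
  # ceph osd pool set kube-rbd pg_num "${PG_NUM}"   # "kube-rbd" is an assumed pool name

For the 6-OSD cluster shown in this thread with 2 replicas this works out to 256 PGs, which would clear the "too few PGs per OSD" warning.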
Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Tue Dec 17 09:17:29 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Tue, 17 Dec 2019 03:17:29 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <412985$63snl7@orsmga007.jf.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191217T000000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From jose.perez.carranza at intel.com Tue Dec 17 13:00:40 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Tue, 17 Dec 2019 13:00:40 +0000 Subject: [Starlingx-discuss] Weekly Testing Meeting Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A3778C2BB@FMSMSX105.amr.corp.intel.com> When: Tuesday, December 17, 2019 11:00 AM-11:30 AM. (UTC-06:00) Guadalajara, Mexico City, Monterrey *~*~*~*~*~*~*~*~*~* Please forward this meeting if I’m missing someone. Weekly meetings on Tuesdays at 9am Pacific • Zoom link: https://zoom.us/j/342730236 • Dialing in from phone: • Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 • Meeting ID: 342 730 236 • International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: text/calendar Size: 2090 bytes Desc: not available URL: From Dariush.Eslimi at windriver.com Tue Dec 17 13:45:58 2019 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Tue, 17 Dec 2019 13:45:58 +0000 Subject: [Starlingx-discuss] stx.3.0 Release milestone declared In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C160583E@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C160583E@ALA-MBD.corp.ad.wrs.com> Message-ID: Congratulations on the successful release of stx-3.0! It looks great! It really marks a major milestone in the evolution of the project as there have been numerous key developments since the introduction of the initial version. Thank you for all your time and hard-work. Regards, Dariush -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: December-16-19 6:15 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx.3.0 Release milestone declared Hello all, This email announces that the stx.3.0 Release milestone has been achieved as of Dec 16, 2019. StarlingX release 3.0 is officially delivered! It is available on CENGN at: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/3.0.0 Release Notes at: https://docs.starlingx.io/releasenotes/r3_release.html This release delivers 15 new features and 182 bug fixes to StarlingX. See the release notes for a full list of features. Thank you to everyone in the Community - from development, test and documentation - for all of your hard work in delivering this release! Regards, Ghada On behalf of the StarlingX Release team _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jose.perez.carranza at intel.com Tue Dec 17 17:12:37 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Tue, 17 Dec 2019 17:12:37 +0000 Subject: [Starlingx-discuss] Weekly Testing Meeting Message-ID: <8D865D7F-F690-4FE9-83FC-AB8D441399C6@intel.com> Summary: Agenda for 12/17 Attendees: JoseP, Bruce, JC, MariaP, Yang 1. Sanity Status: Cristopher: Just green on recent sanity Yang: same status 2. Official Release 3.0 Mail was sent to mailing list with notification of the oficial release, let’s celebrate !!! 3. Unified Sanity. - Any update on GDC infrastructure change ? No updates yet 4. Opens Yang: STX API documents need to be updated and we need some way to test APIs. Open up discussion con community meeting CNCF Test cases, would be nice to execute those test cases. Check with Ada about this task. Regards Jose -- From: jose.perez.carranza at intel.com When: 11:00 AM - 11:30 AM December 17, 2019 Subject: Weekly Testing Meeting Please forward this meeting if I’m missing someone. Weekly meetings on Tuesdays at 9am Pacific • Zoom link: https://zoom.us/j/342730236 • Dialing in from phone: • Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 • Meeting ID: 342 730 236 • International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Tue Dec 17 17:37:11 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 17 Dec 2019 17:37:11 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191217 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-17 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Dec 17 18:30:26 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 17 Dec 2019 18:30:26 +0000 Subject: [Starlingx-discuss] Customer Question - TCP between pods? Message-ID: <9A85D2917C58154C960D95352B22818BED390125@fmsmsx123.amr.corp.intel.com> Forgive me for asking what may be an obvious question. I was meeting with a customer today and the question came up - does StarlingX use/enable TCP between pods? Their experience was with K3S/Flannel which uses UDP. Thank you! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Dec 17 19:02:00 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 17 Dec 2019 19:02:00 +0000 Subject: [Starlingx-discuss] Customer request - Profinet support? Message-ID: <9A85D2917C58154C960D95352B22818BED3901B1@fmsmsx123.amr.corp.intel.com> Does anyone know if StarlingX supports or can support Profinet? We have a potential industrial user who requires this support. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Wed Dec 18 09:09:45 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Wed, 18 Dec 2019 03:09:45 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <3cea8d$7cag7b@fmsmga002.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191218T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Ovidiu.Poncea at windriver.com Wed Dec 18 11:46:42 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Wed, 18 Dec 2019 11:46:42 +0000 Subject: [Starlingx-discuss] question about ceph or storage In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85628DE96@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628DE96@CDSMSX102.ccr.corp.intel.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19EA4747683@ALA-MBD.corp.ad.wrs.com> Hi Chen, see inline. ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Tuesday, December 17, 2019 10:46 AM To: Church, Robert; Poncea, Ovidiu Cc: 'starlingx-discuss at lists.starlingx.io' Subject: question about ceph or storage Hi Bob & Ovidiu Some question about ceph or storage. 1, What’s storage tier and storage profile? What’s the [Ovi] Storage tiering is equivalent with this: https://ceph.io/planet/deploying-ceph-with-storage-tiering/ . [Ovi] Profiles are managed by system storprofile-* and system host-apply-profile and are used to copy configuration from one node to another, identical node on initial provisioning. These profiles are only in system inventory, there is no Ceph equivalent. 2, why for duplex it request such puppet class dependency in ceph.pp? Is this request make all drbd config before class ceph? [Ovi] ceph-mon in AIO-DX is DRBD managed and it has a single, floating, monitor. On DX, when you swact, the monitor is stopped on the active controller and started on the standby controller. Drbd::Resource <| |> -> Class['::ceph'] And flag file “.node_ceph_configured”, to inform drbd make init setup before ceph config? 3, To launch ceph-mon, create a logical volume “ceph-mon-lv” and mount to /var/lib/ceph/mon, not directly mkdir “/var/lib/ceph/mon” for ceph-mon [Ovi] Ceph monitors have their own logical volume. They are managed through "system ceph-mon*" commands. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, December 11, 2019 4:46 PM To: Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' ; Poncea, Ovidiu ; Qi, Mingyuan Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob Some question, what’s storage tier and storage profile? 
As you said, we no longer manage pool and pg num, is this also unnecessary and we should remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor See inline… From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. * I think it’s potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don’t think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. 
This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Wed Dec 18 11:46:35 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 18 Dec 2019 11:46:35 +0000 Subject: [Starlingx-discuss] Customer Question - TCP between pods? Message-ID: Calico does not use an overlay for communication between Pods. It leverages a direct layer3 routed network which will route the Pod payload protocol without encapsulation. The only exception is inter-host communication that uses an IPinIP tunnel, so those will carry an extra outer IP header. -Matt From: "Jones, Bruce E" Date: Tuesday, December 17, 2019 at 1:31 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Customer Question - TCP between pods? Forgive me for asking what may be an obvious question. I was meeting with a customer today and the question came up – does StarlingX use/enable TCP between pods? Their experience was with K3S/Flannel which uses UDP. Thank you! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Wed Dec 18 12:29:43 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 18 Dec 2019 12:29:43 +0000 Subject: [Starlingx-discuss] Ceph Containerization feature discuss In-Reply-To: References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC281C525@ALA-MBD.corp.ad.wrs.com> One significant item that seems to be missing here is the migration for users on stx3 to to stx4 with rook/ceph. We simply just can’t rip out the existing code and tell users to re-deploy. 
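On the pod-to-pod TCP question above: because Calico routes the pod network at layer 3 rather than tunnelling it through UDP, a TCP service in one pod is directly reachable from any other pod by its pod IP. A quick way to confirm this on a running cluster, assuming a busybox image is pullable (the pod names and port are illustrative only):

  kubectl run tcp-server --image=busybox --restart=Never -- nc -l -p 5000
  kubectl wait --for=condition=Ready pod/tcp-server
  SERVER_IP=$(kubectl get pod tcp-server -o jsonpath='{.status.podIP}')
  # push a payload over TCP from a second pod to the first one
  kubectl run tcp-client --rm -i --restart=Never --image=busybox -- sh -c "echo hello-over-tcp | nc ${SERVER_IP} 5000"
  kubectl logs tcp-server      # should print "hello-over-tcp"
  kubectl delete pod tcp-server --ignore-not-found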
Brent From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Monday, December 16, 2019 12:45 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] Ceph Containerization feature discuss Hi All: There was a good discuss about Ceph Containerization task and schedule during the 10th china open source hackathon. open questions/risk: 1.Ceph Images will be pull directly from upstream or built from stx 2. Sysinv integration with Ceph. 3. Ceph feature branch ? needed branch ha / stx-puppet / config / platform-armada-app / ansible-playbooks / integ / utilities 4.rook -client code repo like ceph-client 5. librados / rgw provide 6. Test case/Test Strategy. Schedule/Plan: 1)Python Rook Plugin for Sysinv and Command Tingjie / 2020-1-1 2)Bootstrap and Helm-Override Mapping Martin / 2020-1-4 3)CSI Provider Replace Rbd Provider for Sysinv KubeApp Martin / 2020-1-20 4)SysInv Openstack Interface support provisioning (Cinder/Swift) Tingjie / 2020-1-20 5)Integration/System command/Patch upload ??? Wait for the 4th task finish and Plan 6)Old code cleanup The Detail info was put in etherpad [1], but it was changed simple Chinese characters in some section. Thanks Tingjie, Martin and Yong. [1] https://etherpad.openstack.org/p/OpenSource-Hackathon-Ceph-Containerization Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From saichandu.behara at calsoftinc.com Wed Dec 18 13:19:32 2019 From: saichandu.behara at calsoftinc.com (Saichandu Behara) Date: Wed, 18 Dec 2019 18:49:32 +0530 Subject: [Starlingx-discuss] Calsoft's Contribution- StarlingX simplex All-in-one Openstack Deployment In-Reply-To: <6a578ed9-2090-4891-1078-5f44323e7976@calsoftinc.com> References: <6a578ed9-2090-4891-1078-5f44323e7976@calsoftinc.com> Message-ID: Hi Bill, Any update of Latest Images and Documentation ? http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007064.html Thanks & Regards Sai Chandu Behara On 26-11-2019 17:03, Saichandu Behara wrote: > > Hello all, > > We are ready with one more contribution. > > The deployment of Openstack in StarlingX all-in-one Simplex is done > successfully. It is containing StarlingX with k8s, EdgeX and > Openstack. But we faced some challenges during whole process. Thus, to > ease the process for a newbie, we have created ready-to-use qcow2 > images. One can simply use this image to deploy the VM /(containing > StarlingX with //K8//S, EdgeX//and Openstack)/. Below are the links > for StarlingX_all-in-one_Edgex_with_Openstack setup Document and Image > link. > > Setup Document: > https://drive.google.com/open?id=110kbsRoBFZQ3J99hobu0QGxC_xchYwKk > > Image: https://drive.google.com/open?id=14jxBWnB5ydCk2sf_jc7I2B_e0aTC-rc6 > > Please let us know where can we upload these images and documents > > > -- > Thanks & Regards > Sai Chandu Behara -- Thanks & Regards Sai Chandu Behara -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Frank.Miller at windriver.com Wed Dec 18 14:16:01 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 18 Dec 2019 14:16:01 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting In-Reply-To: <9700A18779F35F49AF027300A49E7C76608F668B@SHSMSX105.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C76608E9D6B@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F54CC@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F6268@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F668B@SHSMSX105.ccr.corp.intel.com> Message-ID: Shuicheng: We took out your commits and did some basic testing so thanks for providing the list. One final question: where can we review the list of the tests you have done? Or can you summarize the test cases please. Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, December 06, 2019 3:34 AM To: Miller, Frank ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, 1 more patch [0] is uploaded today. So there are 8 patches in total. With this patch, token server supports POST method for token fetch, so the WA in containerd is removed. [0]: https://review.opendev.org/697601 Best Regards Shuicheng From: Lin, Shuicheng Sent: Thursday, December 5, 2019 4:11 PM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, Glad to hear that. I have rebased all my patches to latest. And your link is correct. Feel free to contact me if you have any question with it. Thanks. Best Regards Shuicheng From: Miller, Frank > Sent: Thursday, December 5, 2019 6:07 AM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the updates. We would like to take out your changes for KATA containers for a test. Can you rebase your commits and let me know if these are all of the commits: https://review.opendev.org/#/q/topic:kata+(status:open) Once you have rebased we'll create a designer build and run a few tests and let you know if we find anything that needs to be addressed. Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Sunday, December 01, 2019 9:15 PM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I try to run busybox with kata containers by k8s, and it could run successfully in IPv6 environment. Best Regards Shuicheng From: Miller, Frank > Sent: Saturday, November 30, 2019 4:03 AM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the update. It looks like stx-openstack has not yet been tested with IPv6. But we have been testing IPv6 with kubernetes platform only and simple k8s apps. Can you confirm kata containers is working with IPv6 when stx-openstack is not applied/not used? Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, November 29, 2019 12:48 AM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I created below LP for the IPv6 deployment issue I meet. Could you help check whether IPv6 deployment is verfied before and share me the BKM for it if there is? Thanks. 
https://bugs.launchpad.net/starlingx/+bug/1854316 Best Regards Shuicheng From: Miller, Frank > Sent: Tuesday, November 26, 2019 11:37 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting Abbreviated minutes: Next meeting: Tuesday Dec 10 Minutes: 1. Stx.3.0 gating LPs: * Plan for the current 18 gating LPs: * 4 LPs are expected to land for stx.3.0 including the 2 Highs * 2 LPs to be marked invalid/not reproducible * 11 LPs to be re-gated to stx.4.0 * 1 LP TBD (Erich Cordoba to update 1824881) 2. Stx.4.0 features: In features: * 2006145: Kata container support [Shuicheng Lin] --> resourced and In for stx.4.0 * 2006537: Decouple Container Applications from Platform [Bob Church] --> resourced and In for stx.4.0 * 2006770: Backup & Restore - openstack [Ovidiu Poncea] --> resourced and In for stx.4.0 * 2005312: Containerize Openstack clients --> In for now but requires plan * TBD: Upversion Kubernetes and container platform components --> haven't create SB yet but will be required during stx.4.0 NOT In features: * 2006787: Smaller memory node support [Austin Sun] --> not committed for stx.4.0 but being worked on for stx.4.0 (ie: prep) * 2004008: Fault Containerization --> not In because it requires splitting GUI plugin into 2: one with shared panels, the other with the platform panels which is not resourced Etherpad with full minutes: https://etherpad.openstack.org/p/stx-containerization Frank -----Original Appointment----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, November 25, 2019 3:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Containerization Meeting When: Tuesday, November 26, 2019 9:30 AM-10:00 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 Please join me for the bi-weekly containers meeting. Agenda for November 26 meeting: 1. stx.3.0 gating work items: 18 gating LPs (down from 26 at our last meeting) * Status update for high priority LPs (2): * https://bugs.launchpad.net/starlingx/+bug/1838659 kubernetes apiserver certificate needs rotation [Mingyuan Qi] * https://bugs.launchpad.net/starlingx/+bug/1851287 Controller failed to lock following a failover due to elastic pod failure to shutdown [Dan Voiculeasa] * Medium priority LPs (16): * Status for the 4 LPs < 50 days old: * https://bugs.launchpad.net/starlingx/+bug/1851294 [Angie Wang] * https://bugs.launchpad.net/starlingx/+bug/1850438 [Steve Webster] * https://bugs.launchpad.net/starlingx/+bug/1850189 [Stefan Dinescu] * https://bugs.launchpad.net/starlingx/+bug/1846829 [David Sullivan] * Status update for the 12 LPs that >100 days old. [Al, Angie, Bart, Erich, JimG, Ran, Shuicheng, Tao] * Can any be closed as not reproducible or won't fix? * Which ones are being actively worked on? Which ones do the owners have a plan to fix? 2. stx.4.0 planning: * 2006145: Kata container support [Shuicheng Lin] - Request update from Shuicheng if final 2 test scenarios are done (IPv6 testing + external registry with username/pwd authentication) * 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] - Request feature approach & spec update * 2006537: Decouple Container Applications from Platform (stx.4.0 feature) [Bob Church] - Feature status update * Other potential stx.4.0 features --> which are resourced/have plans to address in stx.4.0? 
* 2006770: Backup & Restore - openstack [Ovidiu Poncea] * 2005312: Containerize Openstack clients * 2004008: Fault Containerization * TBD: Upversion Kubernetes and container platform components Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers << File: ATT00002.txt >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ji at sibyl.li Wed Dec 18 14:32:45 2019 From: ji at sibyl.li (Austin Gillmann) Date: Wed, 18 Dec 2019 08:32:45 -0600 Subject: [Starlingx-discuss] Configuring Openstack SSL Certificates In-Reply-To: References: Message-ID: Bumping this again; if this is unsupported do let me know. Thanks! Austin Gillmann On Fri, Dec 6, 2019 at 8:31 PM Austin Gillmann wrote: > Hello all, > > First off thank you Robert and Martin for the assistance re: my > question about enabling swift. I found about the helm chart overrides > on my own but forgot about the extra service. > > I since successfully deployed and all is working as intended, one > minor question is how would I go about adding ssl certificates to the > Openstack API's and Horizon. I found a stub page relating to it, but > no other references except for stx-config docs that may just be for > platform services. > > Do let me know and thanks again; have a great weekend everyone! > -Austin Gillmann > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy.ning at windriver.com Wed Dec 18 14:43:52 2019 From: andy.ning at windriver.com (Andy Ning) Date: Wed, 18 Dec 2019 09:43:52 -0500 Subject: [Starlingx-discuss] Configuring Openstack SSL Certificates In-Reply-To: References: Message-ID: <77a14c79-b890-8e78-3584-261e9d4a46be@windriver.com> On 2019-12-18 09:32 AM, Austin Gillmann wrote: > Bumping this again; if this is unsupported do let me know. > > Thanks! > Austin Gillmann > > On Fri, Dec 6, 2019 at 8:31 PM Austin Gillmann > wrote: > > Hello all, > > First off thank you Robert and Martin for the assistance re: my > question about enabling swift. I found about the helm chart overrides > on my own but forgot about the extra service. > > I since successfully deployed and all is working as intended, one > minor question is how would I go about adding ssl certificates to the > Openstack API's and Horizon. I found a stub page relating to it, but > no other references except for stx-config docs that may just be for > platform services. > I believe that "system certificate-install -m ssl " will enable both platform as well as containerized openstack APIs and Horizon with that certificate. The openstack APIs and Horizon would then be accessible by https://:443 Andy > Do let me know and thanks again; have a great weekend everyone! 
> -Austin Gillmann > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Wed Dec 18 14:46:41 2019 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 18 Dec 2019 14:46:41 +0000 Subject: [Starlingx-discuss] Ceph Containerization feature discuss In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EC281C525@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EC281C525@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Brent : Thanks your mention this. We just discussed in non-distro meeting. Tingjie and Martin will think a way for ceph upgrade from stx.3.0 to stx.4.0 Thanks. BR Austin Sun. From: Rowsell, Brent Sent: Wednesday, December 18, 2019 8:30 PM To: Sun, Austin ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] Ceph Containerization feature discuss One significant item that seems to be missing here is the migration for users on stx3 to to stx4 with rook/ceph. We simply just can’t rip out the existing code and tell users to re-deploy. Brent From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Monday, December 16, 2019 12:45 AM To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] Ceph Containerization feature discuss Hi All: There was a good discuss about Ceph Containerization task and schedule during the 10th china open source hackathon. open questions/risk: 1.Ceph Images will be pull directly from upstream or built from stx 2. Sysinv integration with Ceph. 3. Ceph feature branch ? needed branch ha / stx-puppet / config / platform-armada-app / ansible-playbooks / integ / utilities 4.rook -client code repo like ceph-client 5. librados / rgw provide 6. Test case/Test Strategy. Schedule/Plan: 1)Python Rook Plugin for Sysinv and Command Tingjie / 2020-1-1 2)Bootstrap and Helm-Override Mapping Martin / 2020-1-4 3)CSI Provider Replace Rbd Provider for Sysinv KubeApp Martin / 2020-1-20 4)SysInv Openstack Interface support provisioning (Cinder/Swift) Tingjie / 2020-1-20 5)Integration/System command/Patch upload ??? Wait for the 4th task finish and Plan 6)Old code cleanup The Detail info was put in etherpad [1], but it was changed simple Chinese characters in some section. Thanks Tingjie, Martin and Yong. [1] https://etherpad.openstack.org/p/OpenSource-Hackathon-Ceph-Containerization Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Wed Dec 18 15:00:01 2019 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 18 Dec 2019 15:00:01 +0000 Subject: [Starlingx-discuss] MoM: Weekly StarlingX non-OpenStack distro meeting, 12/18/2019 Message-ID: Hi All: Thanks join the meeting , Merry Christmas and Happy New Year in advance, See you in 2020. MoM for 12/18 meeting: * stx.3.0 release was claimed on 16th Dec, Congratulation and Thanks Team. if any high bug tag stx.3.0 , please fix asap and cherry-pick to stx.3.0 branch * stx.4.0 feature - Ceph Containerization (Tingjie/Martin) Open question / plan https://etherpad.openstack.org/p/OpenSource-Hackathon-Ceph-Containerization - Standardize Flock Package Versioning (JITStack-Daniels) 2 issues for faults and ansible playbook. and update some patch for last week. 
- Kata Container (Shuicheng) Still waiting for WR test result. - CentOS 8.0 upgrade planning (Shuai Zhao) Srpms are almost finished expect openstack depends. rebased and remove workflow-1. tarball 9 tarballs are not started yet. container builder iso , build successfully. next step will install iso and try boot. issues in kernel review is being fixed . ceph-13.2 on-hold for now, if ceph containerization containers repo need centos8 branch. rebase patches and push those to be merged. * stx 3.0 bugs fix there is no high bug for non-distro project , is there any medium bug want to promotion to high and then fix for stx.3.0 ? * Open cancel Dec 25th/ Jan 1st meeting because Christmas and new year. Thanks. BR Austin Sun. From Ghada.Khalil at windriver.com Wed Dec 18 15:52:52 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 18 Dec 2019 15:52:52 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Release meeting Message-ID: <151EE31B9FCCA54397A757BC674650F0C1606413@ALA-MBD.corp.ad.wrs.com> ** Cancelling as release items were covered in the community call today Weekly meeting on Thursday 11AM PT / 1900 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1734 bytes Desc: not available URL: From Ghada.Khalil at windriver.com Wed Dec 18 15:53:22 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 18 Dec 2019 15:53:22 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Release meeting Message-ID: <151EE31B9FCCA54397A757BC674650F0C160642A@ALA-MBD.corp.ad.wrs.com> ** Cancelling due to Christmas/New Year holidays Weekly meeting on Thursday 11AM PT / 1900 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1709 bytes Desc: not available URL: From bruce.e.jones at intel.com Wed Dec 18 18:35:54 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 18 Dec 2019 18:35:54 +0000 Subject: [Starlingx-discuss] Easy download tool Message-ID: <9A85D2917C58154C960D95352B22818BED390C89@fmsmsx123.amr.corp.intel.com> I was thinking today about improving how users can download StarlingX and want to bounce an idea off the community. What if we had a tool like this: $ get_starlingx -version -path If we had that tool, we could set up starlingx.io to let people download that little tool / script from the web, and then the tool would figure out where to find the right ISO image and bring it down, meaning we don't have to explain how to navigate the Cengn directories. Thoughts / comments / volunteers? brucej -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Wed Dec 18 23:33:55 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 18 Dec 2019 23:33:55 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191218 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-18 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Thu Dec 19 00:56:19 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 19 Dec 2019 00:56:19 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting In-Reply-To: References: <9700A18779F35F49AF027300A49E7C76608E9D6B@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F54CC@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F6268@SHSMSX105.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C76608F668B@SHSMSX105.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76608F9C96@SHSMSX105.ccr.corp.intel.com> Hi Frank, Here is the test list I did. 1. Pass daily sanity test by Ada's team for AIO-SX/AIO-DX/Multi/Multi+Storage. 2. Verify container's regression test cases in link: https://docs.google.com/spreadsheets/d/1dwcBwY4Yq1Lo9Der4RylzQ6KYp0BsMHohhEmhwpauDo/edit#gid=1448647783 3. Verify no private registry/private insecure registry/private secure registry/multi private registry case. 4. Verify no docker proxy/with docker proxy case. 5. Verify IPv6 deployment. 6. Verify backup&restore for platform. Best Regards Shuicheng From: Miller, Frank Sent: Wednesday, December 18, 2019 10:16 PM To: Lin, Shuicheng ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: We took out your commits and did some basic testing so thanks for providing the list. One final question: where can we review the list of the tests you have done? Or can you summarize the test cases please. 
Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, December 06, 2019 3:34 AM To: Miller, Frank >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, 1 more patch [0] is uploaded today. So there are 8 patches in total. With this patch, token server supports POST method for token fetch, so the WA in containerd is removed. [0]: https://review.opendev.org/697601 Best Regards Shuicheng From: Lin, Shuicheng Sent: Thursday, December 5, 2019 4:11 PM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, Glad to hear that. I have rebased all my patches to latest. And your link is correct. Feel free to contact me if you have any question with it. Thanks. Best Regards Shuicheng From: Miller, Frank > Sent: Thursday, December 5, 2019 6:07 AM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the updates. We would like to take out your changes for KATA containers for a test. Can you rebase your commits and let me know if these are all of the commits: https://review.opendev.org/#/q/topic:kata+(status:open) Once you have rebased we'll create a designer build and run a few tests and let you know if we find anything that needs to be addressed. Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Sunday, December 01, 2019 9:15 PM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I try to run busybox with kata containers by k8s, and it could run successfully in IPv6 environment. Best Regards Shuicheng From: Miller, Frank > Sent: Saturday, November 30, 2019 4:03 AM To: Lin, Shuicheng >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Shuicheng: Thanks for the update. It looks like stx-openstack has not yet been tested with IPv6. But we have been testing IPv6 with kubernetes platform only and simple k8s apps. Can you confirm kata containers is working with IPv6 when stx-openstack is not applied/not used? Frank From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Friday, November 29, 2019 12:48 AM To: Miller, Frank >; starlingx-discuss at lists.starlingx.io Subject: RE: Minutes: StarlingX Containerization Meeting Hi Frank, I created below LP for the IPv6 deployment issue I meet. Could you help check whether IPv6 deployment is verfied before and share me the BKM for it if there is? Thanks. https://bugs.launchpad.net/starlingx/+bug/1854316 Best Regards Shuicheng From: Miller, Frank > Sent: Tuesday, November 26, 2019 11:37 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Minutes: StarlingX Containerization Meeting Abbreviated minutes: Next meeting: Tuesday Dec 10 Minutes: 1. Stx.3.0 gating LPs: * Plan for the current 18 gating LPs: * 4 LPs are expected to land for stx.3.0 including the 2 Highs * 2 LPs to be marked invalid/not reproducible * 11 LPs to be re-gated to stx.4.0 * 1 LP TBD (Erich Cordoba to update 1824881) 2. 
Stx.4.0 features: In features: * 2006145: Kata container support [Shuicheng Lin] --> resourced and In for stx.4.0 * 2006537: Decouple Container Applications from Platform [Bob Church] --> resourced and In for stx.4.0 * 2006770: Backup & Restore - openstack [Ovidiu Poncea] --> resourced and In for stx.4.0 * 2005312: Containerize Openstack clients --> In for now but requires plan * TBD: Upversion Kubernetes and container platform components --> haven't create SB yet but will be required during stx.4.0 NOT In features: * 2006787: Smaller memory node support [Austin Sun] --> not committed for stx.4.0 but being worked on for stx.4.0 (ie: prep) * 2004008: Fault Containerization --> not In because it requires splitting GUI plugin into 2: one with shared panels, the other with the platform panels which is not resourced Etherpad with full minutes: https://etherpad.openstack.org/p/stx-containerization Frank -----Original Appointment----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, November 25, 2019 3:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Containerization Meeting When: Tuesday, November 26, 2019 9:30 AM-10:00 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 Please join me for the bi-weekly containers meeting. Agenda for November 26 meeting: 1. stx.3.0 gating work items: 18 gating LPs (down from 26 at our last meeting) * Status update for high priority LPs (2): * https://bugs.launchpad.net/starlingx/+bug/1838659 kubernetes apiserver certificate needs rotation [Mingyuan Qi] * https://bugs.launchpad.net/starlingx/+bug/1851287 Controller failed to lock following a failover due to elastic pod failure to shutdown [Dan Voiculeasa] * Medium priority LPs (16): * Status for the 4 LPs < 50 days old: * https://bugs.launchpad.net/starlingx/+bug/1851294 [Angie Wang] * https://bugs.launchpad.net/starlingx/+bug/1850438 [Steve Webster] * https://bugs.launchpad.net/starlingx/+bug/1850189 [Stefan Dinescu] * https://bugs.launchpad.net/starlingx/+bug/1846829 [David Sullivan] * Status update for the 12 LPs that >100 days old. [Al, Angie, Bart, Erich, JimG, Ran, Shuicheng, Tao] * Can any be closed as not reproducible or won't fix? * Which ones are being actively worked on? Which ones do the owners have a plan to fix? 2. stx.4.0 planning: * 2006145: Kata container support [Shuicheng Lin] - Request update from Shuicheng if final 2 test scenarios are done (IPv6 testing + external registry with username/pwd authentication) * 2006787: Smaller memory node support, aka containerize flock services [Austin Sun] - Request feature approach & spec update * 2006537: Decouple Container Applications from Platform (stx.4.0 feature) [Bob Church] - Feature status update * Other potential stx.4.0 features --> which are resourced/have plans to address in stx.4.0? 
* 2006770: Backup & Restore - openstack [Ovidiu Poncea] * 2005312: Containerize Openstack clients * 2004008: Fault Containerization * TBD: Upversion Kubernetes and container platform components Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 9:30am EST / 6:30am PDT / 1430 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers << File: ATT00002.txt >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Thu Dec 19 03:07:43 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Thu, 19 Dec 2019 03:07:43 +0000 Subject: [Starlingx-discuss] question about ceph or storage In-Reply-To: <4C60D9C5C8176C47874FFF36647AA19EA4747683@ALA-MBD.corp.ad.wrs.com> References: <56829C2A36C2E542B0CCB9854828E4D85628DE96@CDSMSX102.ccr.corp.intel.com> <4C60D9C5C8176C47874FFF36647AA19EA4747683@ALA-MBD.corp.ad.wrs.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628E4C5@CDSMSX102.ccr.corp.intel.com> HI Ovidiu Great thanks for your explain! Martin, Chen SSP, Software Engineer 021-61164330 From: Poncea, Ovidiu Sent: Wednesday, December 18, 2019 7:47 PM To: Chen, Haochuan Z ; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Hi Chen, see inline. ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Tuesday, December 17, 2019 10:46 AM To: Church, Robert; Poncea, Ovidiu Cc: 'starlingx-discuss at lists.starlingx.io' Subject: question about ceph or storage Hi Bob & Ovidiu Some question about ceph or storage. 1, What's storage tier and storage profile? What's the [Ovi] Storage tiering is equivalent with this: https://ceph.io/planet/deploying-ceph-with-storage-tiering/ . [Ovi] Profiles are managed by system storprofile-* and system host-apply-profile and are used to copy configuration from one node to another, identical node on initial provisioning. These profiles are only in system inventory, there is no Ceph equivalent. 2, why for duplex it request such puppet class dependency in ceph.pp? Is this request make all drbd config before class ceph? [Ovi] ceph-mon in AIO-DX is DRBD managed and it has a single, floating, monitor. On DX, when you swact, the monitor is stopped on the active controller and started on the standby controller. Drbd::Resource <| |> -> Class['::ceph'] And flag file ".node_ceph_configured", to inform drbd make init setup before ceph config? 3, To launch ceph-mon, create a logical volume "ceph-mon-lv" and mount to /var/lib/ceph/mon, not directly mkdir "/var/lib/ceph/mon" for ceph-mon [Ovi] Ceph monitors have their own logical volume. They are managed through "system ceph-mon*" commands. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, December 11, 2019 4:46 PM To: Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu >; Qi, Mingyuan > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob Some question, what's storage tier and storage profile? 
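[To see the monitor layout Ovidiu describes on a running system, something like the following should work -- a sketch only, assuming a deployed AIO-DX controller with the admin credentials sourced; the grep patterns are assumptions about resource naming:]

source /etc/platform/openrc

system ceph-mon-list                    # monitor(s) known to sysinv
sudo lvs | grep ceph-mon-lv             # dedicated logical volume for the monitor data
mount | grep /var/lib/ceph/mon          # confirm it is mounted where ceph-mon expects
sudo drbd-overview | grep -i ceph       # on AIO-DX the monitor filesystem is a DRBD resource
ceph mon stat                           # which controller currently hosts the floating monitor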
As you said, we no longer manage pool and pg num, is this also unnecessary and we should remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor See inline... From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. * I think it's potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don't think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don't know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. 
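[As a rough illustration of the sizing rule behind that suggestion -- a sketch only, not the actual sysinv/helm/rbd_provisioner.py logic: aim for roughly 100 PGs per OSD divided by the pool replication factor, rounded down to a power of two.]

OSDS=$(ceph osd ls | wc -l)            # number of OSDs in the cluster
REPLICAS=2                             # assumed pool replication size; adjust to your setup

TARGET=$(( OSDS * 100 / REPLICAS ))
PG_NUM=1
while (( PG_NUM * 2 <= TARGET )); do PG_NUM=$(( PG_NUM * 2 )); done
echo "suggested pg_num for a single pool: ${PG_NUM}"

ceph osd pool ls detail                # compare against what the pool currently uses
ceph -s | grep pgs

[With six OSDs and two replicas, for example, this yields 256, which keeps the PGs-per-OSD ratio well above the level that triggers the "too few PGs" health warning.]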
This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Thu Dec 19 07:01:03 2019 From: austin.sun at intel.com (Sun, Austin) Date: Thu, 19 Dec 2019 07:01:03 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) Message-ID: Hi Penny: Do we have the plan for below story ( python-smartpm and rpm-python) ? CentOS8 upgrade is on-going ,and since python3 is using for CentOS8 , those 2 package can not be built successfully because they don't support python3. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Sun, Austin Sent: Tuesday, July 16, 2019 2:39 PM To: Saul Wold ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 Created Story [1] for python-smartpm and [2] for rpm-python. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, July 12, 2019 10:22 AM To: Sun, Austin ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/11/19 6:16 PM, Sun, Austin wrote: > Hi Penny: > Thanks a lot your info. > Story [1] is using to track python2to3 for stx.3.0 . > Task 35794 was created for upgrade requests-toolbelt. > Task 35795 for replacing rpm_python and Task 35796 for > replacing python-smartpm replacing python-smartpm probably need a story on it's own, it will completely change the patch update process. Sau! > > [1] https://storyboard.openstack.org/#!/story/2006158 > > Thank > BR > Austin Sun. 
> > -----Original Message----- > From: Penney, Don [mailto:Don.Penney at windriver.com] > Sent: Friday, July 12, 2019 5:12 AM > To: Saul Wold ; > starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > I think I can use this module in place of the rpm one: > https://pypi.org/project/version_utils/ > > It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python. > > > -----Original Message----- > From: Penney, Don > Sent: Thursday, July 11, 2019 3:50 PM > To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. > > We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? > > I can also look at the current use of the rpm module in patching and look for alternatives. > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 11, 2019 3:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > > > On 7/10/19 7:03 AM, Sun, Austin wrote: >> Hi All: >> The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 >> Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. >> As sync in non-OpenStack distro meeting. >> We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . >> So below 6 packages are coming 3rd party which might be not python2to3 compliance. >> >> Package | who is using >> openvswitch | ovs >> python-cephfs | ceph >> python-smartpm | standalone package >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller > > Can you identify replacement python3 packages for any of these. > > I know we found out that smartpm is used for the patch process, I know > that smartpm is also an older project that does not have any upstream > support any further, so that will require a fair amount of work. > > Sau! > >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin [mailto:austin.sun at intel.com] >> Sent: Wednesday, July 10, 2019 4:03 PM >> To: Xie, Cindy ; Hu, Yong ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Hi All: >> New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. >> There are 11 packages not from centos but including python and may not be compatiable python2 and python3. >> Package | who is using >> openvswitch | ovs >> python-aniso8601 | keystone >> python-cephclient | ceph >> python-cephfs | ceph >> python-django-bash-completion | sysinv >> python-smartpm | standalone package >> python-unittest2 | sysinv >> python-XStatic-jquery-ui | stx-gui >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller >> >> >> I will continue check those 11 packages . >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1808073 >> >> Thanks. 
>> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin >> Sent: Thursday, July 4, 2019 11:43 AM >> To: Xie, Cindy ; Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] Python2 -> Python3 >> >> Hi Cindy: >> Yes. we will do it and update sheet. >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Xie, Cindy [mailto:cindy.xie at intel.com] >> Sent: Thursday, July 4, 2019 11:37 AM >> To: Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Austin, >> Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: >> >> In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. >> >> Thanks. - cindy >> >> -----Original Message----- >> From: Yong Hu [mailto:yong.hu at intel.com] >> Sent: Thursday, July 4, 2019 11:25 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? >> >> In my view the best solution is to wait for CentOS 8.0 :-) >> >> >> On 03/07/2019 2:55 PM, Dean Troyer wrote: >>> On 7/3/19 4:07 PM, Saul Wold wrote: >>>> The current proposal seems to be to completely convert the base >>>> CentOS7.6 system level python to use python3, this carries a high >>>> risk factor as changing out all system-level python code could have >>>> a cascade effect on system functionality and additional dependencies. >>>> While >>> >>> Changing the distro/system Python version out from under the rest of >>> the distro seems like an enormous time sink, much less a significant >>> reliability risk. >>> >>>> A better solution would be to build python3 and the associated >>>> requirements from the existing RHEL EPEL (Extra Packages for >>>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>>> This version correctly installs in a segregated directory tree. >>> >>> We would probably want to run a significant subset of the upstream >>> OpenStack testing on this combination as it is not (AFAIK) tested there. >>>  But this is true of any runtime + distro combination that is not >>> in the fairly short list of combinations that upstream OpenStack >>> actively tests. >>> >>>> Another option would be to delay the actual python2 conversion to >>>> StarlingX 4.0, the OpenStack Train release will still support python2. >>> >>> One downside to this is it leaves us no margin to defer the change >>> again, this is our second chance as it were.  OpenStack U (as of >>> now) is likely to drop py2 support as a guarantee across-the-board. >>> >>>> There is still work that is needed beyond the conversion of the >>>> python code itself to things like RPM specfiles data and other >>>> source code (such as, C code that has #includes of python2.7). It's >>>> not clear to me how much functional testing with python3 has >>>> occurred for the flock beyond what Dean has started with devstack. >>> >>> I managed to get the fault services running on py3, sysinv fell over >>> during the dbsync in my quick post-PTG trial run.  That is as far as >>> I took it.  
Anyone who wants to try can pick out the local.conf I >>> posted [0] >>> >>> dt >>> >>> [0] http://paste.openstack.org/show/753844/ >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yu.chengde at 99cloud.net Thu Dec 19 11:53:58 2019 From: yu.chengde at 99cloud.net (YuChengDe) Date: Thu, 19 Dec 2019 19:53:58 +0800 (GMT+08:00) Subject: [Starlingx-discuss] =?utf-8?q?=5Bstx=5D_Access_stx-openstack_fail?= =?utf-8?q?_after_deployed?= Message-ID: Hi, I follow the specification to deploy the starlingX 3.0. While I access StarlingX-Openstack, neither CLI nor Dashboard response failure. CLI command log list below. [sysadmin at controller-0 ~(keystone_admin)]$ openstack server list The request you have made requires authentication. (HTTP 401) (Request-ID: req-c8e5e7fd-91ef-42c2-b98d-d9b4786c20a0) specification is from https://docs.starlingx.io/deploy_install_guides/r3_release/openstack/access.html Please give me a hand. Thanks -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Thu Dec 19 12:38:33 2019 From: austin.sun at intel.com (Sun, Austin) Date: Thu, 19 Dec 2019 12:38:33 +0000 Subject: [Starlingx-discuss] [stx] Access stx-openstack fail after deployed In-Reply-To: References: Message-ID: Hi ChengDe: Since you have run ‘source /etc/platform/openrc’, then this is using for host keystone, not using container openstack. you can switch to ‘sudo su’, and then ‘export OS_CLOUD=openstack_helm’, then you can access openstack command such as (openstack endpoint list) Thanks. BR Austin Sun. From: YuChengDe Sent: Thursday, December 19, 2019 7:54 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [stx] Access stx-openstack fail after deployed Hi, I follow the specification to deploy the starlingX 3.0. While I access StarlingX-Openstack, neither CLI nor Dashboard response failure. CLI command log list below. [sysadmin at controller-0 ~(keystone_admin)]$ openstack server list The request you have made requires authentication. 
(HTTP 401) (Request-ID: req-c8e5e7fd-91ef-42c2-b98d-d9b4786c20a0) specification is from https://docs.starlingx.io/deploy_install_guides/r3_release/openstack/access.html Please give me a hand. Thanks -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anirudh.Gupta at hsc.com Thu Dec 5 11:00:55 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Thu, 5 Dec 2019 11:00:55 +0000 Subject: [Starlingx-discuss] StarlingX Duplex BareMetal ISO Installation Automation Message-ID: Hi Team, We are planning to install StarlingX BareMetal Duplex Setups on Multiple Sites. Is there any way we can automate the Installation of StarlingX ISO from any remote/staging server? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anirudh.Gupta at hsc.com Wed Dec 11 14:56:33 2019 From: Anirudh.Gupta at hsc.com (Anirudh Gupta) Date: Wed, 11 Dec 2019 14:56:33 +0000 Subject: [Starlingx-discuss] StarlingX Duplex BareMetal ISO Installation Automation In-Reply-To: References: , Message-ID: Hi Team, Can anyone please guide us to Automate StarlingX AIO ISO Installation from any remote/staging Server as we need to select All-In-One Configuration, Graphical Mode and Standard Security Profile, then the installation of iso begins. How can we automate StarlingX ISO Installation? Regards Anirudh Gupta ________________________________ From: Anirudh Gupta Sent: Thursday, 5 December, 2019, 4:30 PM To: starlingx-discuss at lists.starlingx.io Subject: StarlingX Duplex BareMetal ISO Installation Automation Hi Team, We are planning to install StarlingX BareMetal Duplex Setups on Multiple Sites. Is there any way we can automate the Installation of StarlingX ISO from any remote/staging server? Regards Anirudh Gupta DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... 
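[One approach worth sketching here -- an assumption-laden outline, not an official procedure: the interactive menu choices are just predefined kernel command lines in the ISO's isolinux/grub configuration, so a staging server that PXE-boots the target with the equivalent command line, combined with the BMC to control power and boot order, removes the manual steps. For example:]

# BMC address/credentials and the PXE setup are assumptions; the PXE menu entry must
# replicate the kernel command line of the ISO option you would otherwise pick by hand
# (e.g. the All-in-One graphical install with the standard security profile).
BMC=10.0.0.100
BMC_USER=admin
BMC_PASS=secret

ipmitool -I lanplus -H "$BMC" -U "$BMC_USER" -P "$BMC_PASS" chassis bootdev pxe
ipmitool -I lanplus -H "$BMC" -U "$BMC_USER" -P "$BMC_PASS" chassis power cycle

# Follow the install from the staging server over serial-over-LAN
ipmitool -I lanplus -H "$BMC" -U "$BMC_USER" -P "$BMC_PASS" sol activate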
URL: From kristal.dale at intel.com Wed Dec 4 23:21:08 2019 From: kristal.dale at intel.com (Dale, Kristal) Date: Wed, 4 Dec 2019 23:21:08 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 2019-12-04 Message-ID: <43F963BD1517044B90CF82FB2F3CA716220A7D52@ORSMSX121.amr.corp.intel.com> Hello All, Here are this week's docs team meeting minutes. Join us if you have interest in StarlingX docs! We meet Wednesdays 12:30 PST. * Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings * Our tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation Cheers, Kris Current agenda and notes 2019-12-04 * All -- reviews in progress and recent merges -- https://review.opendev.org/#/q/project:starlingx/docs * 3 merged since last week * 7 open: ? https://review.opendev.org/#/c/695627/ - how to move to new openstack version for starlingx - A couple of questions to determine if we need more info in doc (Abraham) ? https://review.opendev.org/#/c/696241/ - Add doc for k8s intel gpu device plugin under operations -A couple of questions to determine if we need more info in doc (Abraham) ? R3 branch: Lets review * https://review.opendev.org/#/c/696553/ - Update .gitreview for r/stx.3.0 - abandon * https://review.opendev.org/#/c/696925/ - Set master branch SW_VERSION to 20.01 - abandon * https://review.opendev.org/#/c/696908/ - Set SW_VERSION to 19.12 - abandon * - want to remove R3 branch * follow up with update to patching doc (KRIS/Mary) ? https://review.opendev.org/#/c/693761/ - depends on other review that has not yet merged (https://review.opendev.org/#/c/693761/1) - Do we need to prod this one forward? ? https://review.opendev.org/#/c/692202/ - First draft STX in a box, needs review (Abraham). Needs hosting of image confirmed. * Abraham and all -- bug status -- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs * Kris -- Releases * R2.0.2 EOY - Will require release notes (no impact to R3) * R3.0 - Target release by WW51. ? R3 Stories: https://storyboard.openstack.org/#!/story/list?tags=stx.docs&tags=stx.3.0 ? R3: Target content for release: https://etherpad.openstack.org/p/stx-r3-target-content * Status review * Release notes - will need to pull these together with release team (Bruce)? * Switchover - Kris * Prep R4 docs - Kris * Requested community help - still needed! * Kris -- STX in a box: (https://storyboard.openstack.org/#!/story/2006622) * Draft in review https://review.opendev.org/#/c/692202/ * Hosting qcow2 images: Provided initial answers to Scott Little (see previous meeting notes). Current info provided by Calsoft re build instructions inadequate. **Need to drive this conversation.** - need tech support to confirm questions. (Abraham) * Kris -- Handfull of items came across discuss - potential new content. Any of these needed for R3? No * Discussion re enabling SWIFT in STX - Worth a doc once it settles out? - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007149.html ? Where? Install guides - under OpenStack in Ops (linked from Install) (Log story with details - https://storyboard.openstack.org/#!/story/2006985) * App generation tool - new doc! - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007160.html (Log story with details - https://storyboard.openstack.org/#!/story/2006986) * Additional info needed re nfv support? 
- http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007013.html (check with Greg) * Provisioning board management control - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007178.html - Do we need additional Horizon docs? (Log story - not sure yet exactly what or where, but capture - https://storyboard.openstack.org/#!/story/2006987) * Greg/Kris -- feedback re usability of docs (from Kubecon) * flatten install landing lists * Get started/quick start (possibly reuse intro page w/ link to aio simplex, stx in a box) * Mike -- StarlingX demo videos (Linjia) - ideally want to have these linked off starlingx.io * Neat! * Intel Shanghai team is using StarlingX to host their internal CI/CD activity. 10+ servers are in the cluster created by Yan Bing. http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007171.html * Misc: * http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/006813.html * This may impact all general install instructions (making sure versions align across all needed files/packages) * Possible guide for deploying on packet.com? (AR: Greg will explore this - Abraham will help) ? https://wiki.openstack.org/wiki/StarlingX/Packet_SIG ? Explore this as content for a new guide -------------- next part -------------- An HTML attachment was scrubbed... URL: From ginting at g.ncu.edu.tw Mon Dec 9 13:10:00 2019 From: ginting at g.ncu.edu.tw (=?UTF-8?B?6buD56u26ZyG?=) Date: Mon, 9 Dec 2019 21:10:00 +0800 Subject: [Starlingx-discuss] neutron server configure didn't update Message-ID: Dear StarlingX team I have an issue about openstack network provider. I can show it in host system when I create flat network. [sysadmin at controller-0 ~(keystone_admin)]$ system datanetwork-list +--------------------------------------+---------------+--------------+------+ | uuid | name | network_type | mtu | +--------------------------------------+---------------+--------------+------+ | 20b6055a-5a5a-4789-8ceb-b43ecb1512c0 | physnet-sriov | flat | 1500 | | 25fd1e7d-1871-483f-822a-9da1e10eb31f | physnet0 | vlan | 1500 | +--------------------------------------+---------------+--------------+------+ But My neutron-server pod didnt update the config. [root at neutron-server-66dd786c49-rh4kb /]# cat /etc/neutron/plugins/ml2/ml2_conf.ini [agent] extensions = [ml2] extension_drivers = port_security mechanism_drivers = openvswitch,sriovnicswitch,l2population path_mtu = 0 physical_network_mtus = physnet0:1500 tenant_network_types = vlan,vxlan type_drivers = flat,vlan,vxlan [ml2_type_flat] flat_networks = [ml2_type_vxlan] vni_ranges = vxlan_group = [ovs_driver] vhost_user_enabled = true [securitygroup] firewall_driver = openvswitch Maybe the reason is that my host system is degraded [sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | degraded | +----+--------------+-------------+----------------+-------------+--------------+ And I found out my subfunction_availability is fail and subfunction_operif disabled. It will make my controller-0 degraded? How can I fix this bug? Best Regards -------------- next part -------------- An HTML attachment was scrubbed... 
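[A few checks that may help narrow down the empty flat_networks above -- a sketch using standard StarlingX CLIs, with the interface name as a placeholder. The neutron overrides, including flat_networks, are generated from the data networks actually attached to an interface on an openstack-labelled host, so a data network that was created but never assigned will not show up in the ml2 config until it is assigned and the application is re-applied:]

source /etc/platform/openrc

fm alarm-list                                      # why is controller-0 degraded?
system host-label-list controller-0                # openstack-compute-node / openvswitch labels present?
system host-if-list -a controller-0                # is there an interface of class data / pci-sriov?
system interface-datanetwork-list controller-0     # is physnet-sriov attached to an interface?

# If it was never assigned, attach it and re-apply the application so the
# neutron helm overrides (including flat_networks) are regenerated:
system interface-datanetwork-assign controller-0 <interface-name> physnet-sriov
system application-apply stx-openstack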
URL: From huifeng.le at intel.com Thu Dec 12 15:16:19 2019 From: huifeng.le at intel.com (Le, Huifeng) Date: Thu, 12 Dec 2019 15:16:19 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 12/12 Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D6597B3@SHSMSX104.ccr.corp.intel.com> No meeting in Dec 26, Happy New Year! Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Team Meeting Agenda/Notes - Dec 12/2019 * stx.3.0 Bugs: All open issues should try with latest daily build with train patch (ISO 20191115: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/006973.html) * stx.3.0 final compile candidate build: Dec.11 * Query: https://bugs.launchpad.net/starlingx/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=stx.networking+stx.3.0&field.tags_combinator=ALL&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search * High: ? https://bugs.launchpad.net/starlingx/+bug/1841189 ping to vm fails in live migration test - Gongjun * As per Matt, the NAT-Box is directly connected to the neutron router via an L2 switch. The rest of the topology diagram is pretty accurate. * Next steps: o Review the logs from Wendy to see if there are any ovs or neutron issues o Review the system data (which is also included in the collect data (./var/extra) ) to help replicate the setup to reproduce more closely o Huifeng will also ask Wendy to try some additional tests when the system is in the failure condition ? https://bugs.launchpad.net/bugs/1854875 neutron router external gateways unreachable on Train - Yao, Le * Close as Invalid * Medium: ? https://bugs.launchpad.net/starlingx/+bug/1821026 Containers: Resolving hostname fails within nova containers leading to config_drive VM migration failures - fupingxie * Change to high and target to fix in 3.0 * stx 3.0 features * Openstack Rebase to Train ? on track from the release meeting update * stx.4.0 Feature Proposals: https://etherpad.openstack.org/p/stx.4.0-feature-candidates * Milestones: https://docs.google.com/spreadsheets/d/1a93wt0XO0_JvajnPzQwnqFkXfdDysKVnHpbrEc17_yg/edit#gid=1107209846 ? Milestone-1 1/24/2020 ? Milestone-2 3/27/2020 ? Milestone-3 5/15/2020 ? Feature Test 5/29/2020 ? Regression Test TBD ? RC1 6/12/2020 ? Final Regression TBD ? Release 7/3/2020 * Unit Test Initiative: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-October/006710.html (To be discussed) ? Interface/Networking configuration is part of the config repo * Follow the same unit test framework already in place * Action: Review the test cases (related to networking) to understand if we can increase the coverage o One gap is related to validating the puppet configuration output is valid for the various configs ? 
As per Matt, the main gap here is the functional/intergration testing with OVS and OVS-DPDK * No need to re-run ovs or neutron tests as these are covered by these projects * Need to focus on unique setup for starlingx * TSN support in Kata: ? Depend on Kata container support * Containerize OVS-DPDK (cheng): propose for 4.0 ? https://review.opendev.org/#/c/694188 , ovs chart patch, this patch is in progress ? https://review.opendev.org/#/c/696437/ , import OVS-DPDK patches, cherry-pick latest ovs-dpdk feature patch from openstack-helm-infra into STX, Can we consider to upgrade the openstack-helm version in 4.0? - Check with Frank on the 4.0 plan ? Will need one more patch in openstack-helm-infra project to support ovs chart per-host overrides. As discussed in irc meeting, the community is happy to see this feature implemented * https://storyboard.openstack.org/#!/story/2006965 * IPv6 PXE boot network support: https://storyboard.openstack.org/#!/story/2006442 - Huang, XiangDong - Plan to start from Feb. * OVS collectd resource monitoring: Chenjie ? https://storyboard.openstack.org/#!/story/2002948 - Monitor other features besides interface/port * Monitor datanetwork for non openstack worker node by link monitor Marvin o Patch submitted: https://review.opendev.org/#/c/695390/, https://review.opendev.org/#/c/695391/,https://review.opendev.org/#/c/694940/, https://review.opendev.org/#/c/694927/ ? WIP to add test cases and fix puppet issues * Support SR-IOV NIC with VFIO and Netdevice VF's: Steve ? https://storyboard.openstack.org/#!/story/2006842 ? Small enhancement related to configuration -- code merged in master a couple of days * Backlog Stories * A number of networking stories exist, but have no target release. Any actions we want to take on those? * Use openstack-helm-infra calico to manipulate network settings: https://storyboard.openstack.org/#!/story/2005848 * Userspace CNI plugin backed by OVS-DPDK: https://storyboard.openstack.org/#!/story/2005207 * OVS rx multi-queue affinity: https://storyboard.openstack.org/#!/story/2002960 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristal.dale at intel.com Thu Dec 12 21:53:33 2019 From: kristal.dale at intel.com (Dale, Kristal) Date: Thu, 12 Dec 2019 21:53:33 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 2019-12-11 Message-ID: <43F963BD1517044B90CF82FB2F3CA71622A87960@ORSMSX121.amr.corp.intel.com> Hello All, Here are this week's docs team meeting minutes. Join us if you have interest in StarlingX docs! We meet Wednesdays 12:30 PST. * Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings * Our tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation Cheers, Kris Current agenda and notes 2019-12-11 * All -- reviews in progress and recent merges -- https://review.opendev.org/#/q/project:starlingx/docs * 12 merged since last week * 6 open: ? https://review.opendev.org/#/c/698109/ - docs cutover to R3 - pending release - Bruce will notify ? https://review.opendev.org/#/c/698112/ - R3 release notes - pending release - Bruce will notify ? https://review.opendev.org/#/c/696241/ - k8s gpu plugin - pending final review - Greg (send email) ? 
https://review.opendev.org/#/c/698081/ - Docker reg opsdoc - pending question to author - Mary will make general update, circle back with URL if needed. ? https://review.opendev.org/#/c/695627/ - move OpenStack to new version in STX - needs tech review (Erich) ? https://review.opendev.org/#/c/693761/ - depends on other review that has not yet merged (https://review.opendev.org/#/c/693761/1) * Abraham and all -- bug status -- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs * Kris -- Releases * R2.0.2 EOY - Will require release notes (no impact to R3) * R3.0 - Expected 12/12/19. ? R3 Stories: https://storyboard.openstack.org/#!/story/list?tags=stx.docs&tags=stx.3.0 ? R3: Target content for release: https://etherpad.openstack.org/p/stx-r3-target-content * Status review o What will/won't make release. * Release notes - https://review.opendev.org/#/c/698112/ * Switchover - Kris o Staged: https://review.opendev.org/#/c/698112/, https://review.opendev.org/#/c/698109/ * Prep R4 docs - Kris * Requested community help - still needed! * Kris -- STX in a box: (https://storyboard.openstack.org/#!/story/2006622) * Draft in review https://review.opendev.org/#/c/692202/ * Hosting qcow2 images: Provided initial answers to Scott Little (see previous meeting notes). Current info provided by Calsoft re build instructions inadequate. ? Questions sent re build instructions ? Need tech support to confirm questions. (Abraham) * Greg/Kris -- feedback re usability of docs (from Kubecon) * flatten install landing lists - DONE * Get started/quick start (possibly reuse intro page w/ link to aio simplex, stx in a box) - DONE * Mike/Kris -- StarlingX demo videos (Linjia) - Conversation underway, David Kinder (Intel) is engaging to help with video editing/subtitles * Kris -- potential new content. * Discussion re enabling SWIFT in STX - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007149.html ? Story logged with details - https://storyboard.openstack.org/#!/story/2006985 * App generation tool - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007160.html ? Story logged with details - https://storyboard.openstack.org/#!/story/2006986 * Additional info needed re nfv support? - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007013.html (check with Greg) * Provisioning board management control - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007178.html - additional Horizon docs ? Story logged - not sure yet exactly what or where - https://storyboard.openstack.org/#!/story/2006987 * Intel Shanghai team is using StarlingX to host their internal CI/CD activity. 10+ servers are in the cluster created by Yan Bing. http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007171.html. Possible use case example for docs? * Possible guide for deploying on packet.com? (AR: Greg will explore this - Abraham will help) ? https://wiki.openstack.org/wiki/StarlingX/Packet_SIG ? Explore this as content for a new guide * Kris -- Updating Contributor guide -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Wed Dec 18 15:00:38 2019 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 18 Dec 2019 15:00:38 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX non-OpenStack distro meeting Message-ID: Hi All: New Series of non-Openstack disto meeting invitation , Agenda for each will be sent individual email. 
* Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4317 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT76911 1.jpg Type: image/jpeg Size: 8269 bytes Desc: ATT76911 1.jpg URL: From austin.sun at intel.com Wed Dec 18 15:01:09 2019 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 18 Dec 2019 15:01:09 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX non-OpenStack distro meeting Message-ID: Hi All: New Series of non-Openstack disto meeting invitation , Agenda for each will be sent individual email. * Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4196 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT49616 1.jpg Type: image/jpeg Size: 8269 bytes Desc: ATT49616 1.jpg URL: From y639chen at edu.uwaterloo.ca Wed Dec 18 15:31:14 2019 From: y639chen at edu.uwaterloo.ca (Yuechuan Chen) Date: Wed, 18 Dec 2019 15:31:14 +0000 Subject: [Starlingx-discuss] Collect Filesystem is Now Available Message-ID: Hello Everyone, I’ve developed the Collect Filesystem tool which is now available for the StarlingX community. You can use this tool to store large collect files when reporting bugs via Launchpad. Simply use the tool to upload a file and then add its URL location into the Launchpad bug. You no longer need to split up large collect files and attach multiple files to your bugs. It uses Flask as its framework and OpenID for user identification. The current address is https://files.starlingx.kube.cengn.ca/ To register, key in your OpenID at the sign in page. This id can be found in your Launchpad profile. [cid:a58a1e0f-a919-45e2-8ef6-b53b9048ab01] [cid:a1c4a3da-be4b-4fc1-bf42-75a8838a567f] This will take you to the Ubuntu One login page, use your own credentials for that. As this is the first time you sign in, you will be asked to create your profile. [cid:e330d459-df6f-4315-9e49-3a23759bee4a] As the registration complete, you now have access to all features in the Collect Filesystem. Use the navigation menu on the left to use those features. The profile page contains your user information that can be modified any time. 
You can delete your profile if you wish, but be warned that you will loose all the files you uploaded and it will not be recoverable. The upload page allows you to upload and store your files on the server. A StarlingX Launchpad need to be specified, and the application only accepts valid collect log files. The personal files and public files pages list the files uploaded by yourself and by the entire community. You have the options to rename, delete and change the Launchpad of your own log files. The launchpads page let users find files by launchpads for which users have previously uploaded collect logs. You will be able to search through the launchpads by their title or id. You can also download the files under a specific Launchpad as a compressed file. Thank you everyone, I hope you like this tool. Please let myself and Al Bailey know if you have any suggestions or feedback to improve this tool. Sincerely, Nathan Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 16449 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 10943 bytes Desc: image.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 15608 bytes Desc: image.png URL: From bruce.e.jones at intel.com Wed Dec 18 15:55:28 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 18 Dec 2019 15:55:28 +0000 Subject: [Starlingx-discuss] StarlingX community meeting Dec 18 2019 Message-ID: <9A85D2917C58154C960D95352B22818BED390AED@fmsmsx123.amr.corp.intel.com> Next Call: December 18 * Logistics * Bruce will lead this call due to Bill's absence. * This will be the last Community call of the year due to the holidays. We will meet again on Jan 8th. * date/time: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20191211T1500 * Standing Topics ? Sanity: any RED since last week? o none ? Unanswered Requests for Help on Mailing List * Docs team has some open requests for help with the documentation - Ghada to help round up volunteers :) ? Reviews needing input * Saul - thank you for the on-going reviews of the CentOS changes. * https://review.opendev.org/#/c/696035/1 --- changed priority to High so it'll get cherry-picked * https://review.opendev.org/#/c/688320/3 --- change to High as well * https://review.opendev.org/#/c/692276/ --- not critical for today, but should be cherry-picked once it's merged (to 3.0 and 2.0) * https://review.opendev.org/#/c/695627/ - "how to upversion openstack" doc needs review * Supporting users - we are (hopefully) about to see an influx of new users who will need support. * Improve starlingx.io - * IRC presence * Other ideas? * Upcoming maintenance Releases (Ghada) * stx.2.0.2 ? 
Planning a maintenance release stx.2.0.2 in the new year to provide fixes for a number of CVEs already fixed in master/stx.3.0 * https://bugs.launchpad.net/starlingx/+bug/1849197 - ntp * https://bugs.launchpad.net/starlingx/+bug/1849195 & https://bugs.launchpad.net/starlingx/+bug/1849203 - ruby * https://bugs.launchpad.net/starlingx/+bug/1849210 - wget * https://bugs.launchpad.net/starlingx/+bug/1849201 - elfutils * https://bugs.launchpad.net/starlingx/+bug/1849202 - polkit * https://bugs.launchpad.net/starlingx/+bug/1849200 - systemd * https://bugs.launchpad.net/starlingx/+bug/1852825 - OVMF (includes a pull request for stx-nova) * https://bugs.launchpad.net/starlingx/+bug/1849205 - sudo * https://bugs.launchpad.net/starlingx/+bug/1849198 & https://bugs.launchpad.net/starlingx/+bug/1849199 - libX11 * Cherrypicks for those are in progress ? Would also like to include the following: * Kernel CVEs after some soak in master -- will allow two weeks of soak after merge o https://bugs.launchpad.net/starlingx/+bug/1849209 - kernel o https://bugs.launchpad.net/starlingx/+bug/1849206 - kernel o https://bugs.launchpad.net/starlingx/+bug/1847817 - kernel (addressed implicitly as the fix is in the chosen kernel version) * kubernetes apiserver certificate needs rotation o https://bugs.launchpad.net/starlingx/+bug/1838659 * Any others? o stx.2.0 Open Bugs: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.2.0&orderby=-datecreated&start=0 o AR PLs - are there are bugs that should be on this list? Request response by Jan 8 (next community call) * stx.3.0.1 ? Release milestone achieved on Monday Dec 16. woohoo! ? The r/stx.3.0 branch is open for cherrypicking high priority bugs which are tagged for stx.3.0 for inclusion in the next maintenace release * Current count of high priority bugs: 16 * Workflow is to fix in master first, then proceed with the cherrypick * Next maintenance release schedule is TBD - likely late Jan/early Feb depending on available fixes ? Need update for the PL/TL action from the last community meeting * Request each PL to review the Medium bugs and decide if any should be changed to High priority so that they are cherrypicked in a subsequent maintenance release o AR: PLs Target Finishing this scrub by the next community call - Dec 18 o Update: ? Austin - stx.distro.other, stx.storage >> Review in progress / Target EOD ? Yong - stx.distro.openstack >> Two mediums will be upgraded to High ? Dariush - stx.config, stx.fault, stx.metal >> Review complete, no new Highs ? Frank - stx.containers >> Review complete, Mediums moved to 4.0 * Follow-Up Action: After Dec 18, Ghada will update the Medium priority bugs as follows: o Medium >= 100 days that are not reproduced recently will be marked as Low priority / no target release o Medium < 100 days OR recently reproduced will be moved to stx.4.0 * AR Review (see below) * StarlingX 2019 Year in Review * Release 2.0 ? 206 Storyboard Stories completed ? 544 Defects closed ? Key features: Kubernetes support, 95% patch removal, containerized OpenStack Stein * Release 3.0 ? 39 Storyboard Stories completed ? 178 Defects closed ? Key features: Distributed Cloud, OpenStack Train, TSN for VMs, Intel FPGA/GPU THANK YOU ALL FOR A GREAT 2019!!!!! Open Action Required (AR) Summary (see https://etherpad.openstack.org/p/stx-status-archive for archived actions) * -- -- * Dec 11, 2019 -- PLs -- follow up on 3.0 bugs per Ghada email http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007250.html ? 
DONE * Nov 30, 2019 -- Kris -- link to developer resource for upversioning from Stein to Train ? Dec 11 (Kris): We have a doc in review for updating OpenStack in StarlingX to Train: https://review.opendev.org/#/c/695627/. I anticipate this will be wrapped up by end of this week. ? Dec 18 Needs technical review!!! * Oct 23, 2019 -- Bill -- CalSoft Image - Bill to raise AR to find a long term home for this ? Nov 13: will look at the possibility of storing this at CENGN once we get the "big files" capability - consider the questions that Scott has about this e.g. retention policy ? Dec 18 Expecting an email today about progress on this for log files only * Sep 25, 2019 -- Doc Team -- find a place for Sai Chandu's images/documentation ? Nov 11 (Kristal): review with the draft STX in a box guide is here: https://review.opendev.org/#/c/692202/, Abraham helping with validation, still TBD re: hosting images for download (at CENGN?) ? Oct 2: We believe this will go into our StarlingX-in-a-box topic as captured in https://storyboard.openstack.org/#!/story/2006622. Bruce has volunteered to help Sai with the contribution: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-September/006316.html ? Dec 18 - doc reviews in progress, see above for large file hosting * Sep 11, 2019 -- Ildiko -- look into documenting the process of having documentation part of the development process to the contributor guide ? Oct 16: possible topic for upcoming Community meeting ? Oct 9: Release Team has added a checklist item for the feature PL @Milestone-3 that user documentation input for a given feature has been provided ? Sep 18: no update this week ? Dec 18 First pass of the doc updates have been merged, a few issues left to resolve. * July 31, 2019 -- Ada/Yang -- combine sanity runs/reports ? Nov 30: per Jose, they sorted out their Pytest issues & are now reviewing the results with Yang (more at https://etherpad.openstack.org/p/stx-status-archive) ? Dec 18 we will track this for 2020 work. The work around Ada's team deployed isn't working. * June 26, 2019 -- Yang -- arrange an automation framework info session for the Community ? Nov 13: per Ada, this will be addressed once they sort out the Pytest issues (per last AR) (more at https://etherpad.openstack.org/p/stx-status-archive) ? Dec 18 to be done in 2020 once the technical issues are sorted * June 5, 2019 --- -- provide requirements/input on Community Activity Dashboard (see notes from June 5 meeting) ? Oct 23: Thierry sent survey, see http://lists.starlingx.io/pipermail/starlingx-discuss/2019-October/006580.html ? Sep 5: Thierry added "Individual Contributor" page, see his email http://lists.starlingx.io/pipermail/starlingx-discuss/2019-September/005913.html ? see https://starlingx.biterg.io/ & https://etherpad.openstack.org/p/stx-bitergia ? Aug 14: several updates from Thierry on the stx-bitergia etherpad ? July 3: see updates on stx-bitergia etherpad ? June 19: added github repos & ansible-playbooks repo; request to see which commits a contributor has done ? Dec 18 CLOSED * May 8, 2019 -- Frank -- follow up with CENGN on storage for really large files for reference in Launchpads ? Nov 20: trial is happening now through next week (more at https://etherpad.openstack.org/p/stx-status-archive) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kristal.dale at intel.com Thu Dec 19 00:16:47 2019 From: kristal.dale at intel.com (Dale, Kristal) Date: Thu, 19 Dec 2019 00:16:47 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 2019-12-18 Message-ID: <43F963BD1517044B90CF82FB2F3CA7162346BBB7@fmsmsx120.amr.corp.intel.com> Hello All, Here are this week's docs team meeting minutes. Please note that our next meeting will be January 8th, 2020. Join us if you have interest in StarlingX docs! We meet Wednesdays 12:30 PST. * Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings * Our tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation Cheers, Kris Current agenda and notes 2019-12-18 * All -- reviews in progress and recent merges -- https://review.opendev.org/#/q/project:starlingx/docs * 14 merged since last week * 4 open: ? https://review.opendev.org/#/c/695627/ - Move to new OpenStack - pending tech review ? https://review.opendev.org/#/c/699517/ - R2.0.2 release notes - staged for release ? https://review.opendev.org/#/c/693761/ - Still pending dependency, merge conflict fixed ? https://review.opendev.org/#/c/692202/ - STX in a box - pending * Abraham and all -- bug status -- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs * Kris -- Releases * R3.0 - released on Monday! Congrats. ? R3: Target content for release: https://etherpad.openstack.org/p/stx-r3-target-content ? We have some remaining items flagged for R3: https://storyboard.openstack.org/#!/story/list?tags=stx.docs&tags=stx.3.0 * R2.0.2 Early Q1'20 - Release notes prepped. https://review.opendev.org/#/c/699517/1 * R3.0.1 - Early Q1'20 (likely w/ R2.0.2). Will require release notes. * R4.0 - planning ? R4 install guides set up. ? Would like to engage with engineering teams once planning is finalized/more solid. ID features/updates requiring docs, and people on teams to contribute. * Greg/Kris -- semi-urgent docs update -- Need to update R3 docs to provide caution re updating Kubernetes secrets (until a long-term approach is implemented) * Kris will work to line up info before 12/19/19 - Bruce will be backup. * Bruce -- Improve www.starlingx.io -- https://storyboard.openstack.org/#!/story/2007029 * Get started link will be an easy change - make PR as soon as ready. * Possible idea: expanding docs Get Started section into a standalone page (Kris) * Note that the marketing call handles larger changes related to the website (graphics, design, etc). https://wiki.openstack.org/wiki/Starlingx/Meetings#8am_Pacific_-_Community_Marketing_Planning_Call * Put requirements together, then attend the marketing call (coordinate with Ildiko) (Kris will coord requirements) * Kris -- Docs team holiday coverage * Kris out Dec 20-Jan 4. Bruce/Mike will provide coverage. * This will be the last docs call of the year due to holidays. Reconvene on Jan 8th. * Kris -- Ongoing tasks: * Mike -- adding video to StarlingX.io -- in progress * Mike -- Adding article to StarlingX.io -- In progress https://01.org/blogs/vmrod25/2019/accelerating-ai-workloads-starlingx-edge-servers-using-x86-technology ? Let's do this as a blog post: https://www.starlingx.io/blog/ ? Need to write a summary (Mike), then link to article. Submit as a PR to the site (Kris). ? Target - first part of January * Kris -- Updated docs contributor guide: ? Approach is to highlight the differences w/ OpenStack (we inherit/use a lot from OpenStack guides) ? Recommend: move Build STX guide to dev resources (from contribute landing) ?
Contributor landing page merits a review: What is the logical split of content between contributor and developer resources? * Move https://wiki.openstack.org/wiki/StarlingX/CodeSubmissionGuidelines into docs (under contributor guides) - yes (Kris) * Kris -- STX in a box: (https://storyboard.openstack.org/#!/story/2006622) ? Draft in review https://review.opendev.org/#/c/692202/ ? Hosting qcow2 images: Updated image/build instructions provided. Needs review. (send to Scott and Saul) - will send updates shortly. * Kris -- Content questions * https://storyboard.openstack.org/#!/story/2004654 - marked invalid * https://storyboard.openstack.org/#!/story/2006370 - go in ops * Additional info needed re nfv support? - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-November/007013.html(check with Greg) * Kris -- potential new content * Rook and ceph containerization/tool to create/build container applications to run on StarlingX - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007228.html * Allow configuration of PTP master/slave interfaces - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007231.html * Install guide for distributed cloud on VM -- see http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007281.html * Intel Shanghai team is using StarlingX to host their internal CI/CD activity. 10+ servers are in the cluster created by Yan Bing. http://lists.starlingx.io/pipermail/starlingx-discuss/2019-December/007171.html. Possible use case example for docs? ? Log story? * Possible guide for deploying on packet.com? (AR: Greg will explore this - Abraham will help) ? https://wiki.openstack.org/wiki/StarlingX/Packet_SIG ? Explore this as content for a new guide ? Log story? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Thu Dec 19 14:54:28 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Thu, 19 Dec 2019 14:54:28 +0000 Subject: [Starlingx-discuss] Partition operations broken in last night's load (master) Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58C1BB9436@ALA-MBD.corp.ad.wrs.com> If you are using last night's load (master), partition operations will not work. For example, if you attempt to add a partition with "system host-disk-partition-add", the runtime puppet manifest will fail to apply due to an error in the manage-partitions script. There may also be issues with the query_pci_id command as well. The issue was introduced with this change: https://review.opendev.org/#/c/699750/ The change is being reverted. My apologies for any issues this caused. Bart -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Thu Dec 19 17:19:37 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 19 Dec 2019 17:19:37 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191219 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-19 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [BLOCKED] Sanity Platform 07 TCs [BLOCKED] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 07 TCs [BLOCKED] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 08 TCs [BLOCKED] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 09 TCs [BLOCKED] TOTAL: [ 66 TCs ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [BLOCKED] Sanity Platform 07 TCs [BLOCKED] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 07 TCs [BLOCKED] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 08 TCs [BLOCKED] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 09 TCs [BLOCKED] TOTAL: [ 66 TCs ] "The issue was introduced with this change: https://review.opendev.org/#/c/699750/ The change is being reverted by Bart" regards Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Dec 19 18:27:35 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 19 Dec 2019 18:27:35 +0000 Subject: [Starlingx-discuss] Collect Filesystem is Now Available In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BED392506@fmsmsx123.amr.corp.intel.com> This is very cool, thank you for putting this together. I've added a brief description and link to the tool to the development process doc at: https://review.opendev.org/700048. brucej From: Yuechuan Chen [mailto:y639chen at edu.uwaterloo.ca] Sent: Wednesday, December 18, 2019 7:31 AM To: starlingx-discuss at lists.starlingx.io Cc: Miller, Frank ; Al.Bailey at windriver.com Subject: [Starlingx-discuss] Collect Filesystem is Now Available Hello Everyone, I've developed the Collect Filesystem tool which is now available for the StarlingX community. You can use this tool to store large collect files when reporting bugs via Launchpad. Simply use the tool to upload a file and then add its URL location into the Launchpad bug. You no longer need to split up large collect files and attach multiple files to your bugs. It uses Flask as its framework and OpenID for user identification. The current address is https://files.starlingx.kube.cengn.ca/ To register, key in your OpenID at the sign in page. This id can be found in your Launchpad profile. 
[cid:image001.png at 01D5B656.BE0E0450] [cid:image002.png at 01D5B656.BE0E0450] This will take you to the Ubuntu One login page, use your own credentials for that. As this is the first time you sign in, you will be asked to create your profile. [cid:image003.png at 01D5B656.BE0E0450] As the registration complete, you now have access to all features in the Collect Filesystem. Use the navigation menu on the left to use those features. The profile page contains your user information that can be modified any time. You can delete your profile if you wish, but be warned that you will loose all the files you uploaded and it will not be recoverable. The upload page allows you to upload and store your files on the server. A StarlingX Launchpad need to be specified, and the application only accepts valid collect log files. The personal files and public files pages list the files uploaded by yourself and by the entire community. You have the options to rename, delete and change the Launchpad of your own log files. The launchpads page let users find files by launchpads for which users have previously uploaded collect logs. You will be able to search through the launchpads by their title or id. You can also download the files under a specific Launchpad as a compressed file. Thank you everyone, I hope you like this tool. Please let myself and Al Bailey know if you have any suggestions or feedback to improve this tool. Sincerely, Nathan Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16449 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 10943 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 15608 bytes Desc: image003.png URL: From sgw at linux.intel.com Thu Dec 19 18:56:58 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 19 Dec 2019 10:56:58 -0800 Subject: [Starlingx-discuss] Collect Filesystem is Now Available In-Reply-To: References: Message-ID: Yuechuan, This looks great for launchpads Will this also allow for uploading public files not associated with a launch pad? This would be in order to store something like the work Calsoft is doing with the AIO Simplex image. Thanks On 12/18/19 7:31 AM, Yuechuan Chen wrote: > Hello Everyone, > > I’ve developed the Collect Filesystem tool which is now available for > the StarlingX community.You can use this tool to store large collect > files when reporting bugs via Launchpad.Simply use the tool to upload a > file and then add its URL location into the Launchpad bug.You no longer > need to split up large collect files and attach multiple files to your bugs. > > It uses Flask as its framework and OpenID for user identification. The > current address is https://files.starlingx.kube.cengn.ca/ > > To register, key in your OpenID at the sign in page. This id can be > found in your Launchpad profile. > > > This will take you to the Ubuntu One login page, use your own > credentials for that. As this is the first time you sign in, you will be > asked to create your profile. > > > As the registration complete, you now have access to all features in the > Collect Filesystem. Use the navigation menu on the left to use those > features. 
> > The profile page contains your user information that can be modified any > time. You can delete your profile if you wish, but be warned that you > will loose all the files you uploaded and it will not be recoverable. > > The upload page allows you to upload and store your files on the server. > A StarlingX Launchpad need to be specified, and the application only > accepts valid collect log files. > > The personal files and public files pages list the files uploaded by > yourself and by the entire community. You have the options to rename, > delete and change the Launchpad of your own log files. > > The launchpads page let users find files by launchpads for which users > have previously uploaded collect logs. You will be able to search > through the launchpads by their title or id. You can also download the > files under a specific Launchpad as a compressed file. > > Thank you everyone, I hope you like this tool. Please let myself and Al > Bailey know if you have any suggestions or feedback to improve this tool. > > Sincerely, > > Nathan Chen > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From sgw at linux.intel.com Thu Dec 19 19:09:03 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 19 Dec 2019 11:09:03 -0800 Subject: [Starlingx-discuss] Build Team notes 12/19 Message-ID: <33a1c5fb-78c3-69f0-a7fb-96a35dcce4f8@linux.intel.com> Dec 19, 2019 Notes 1) Consider moving the time of this meeting to include China team 2) STX-3.0 Released! - Thanks to Scott for turning the cranks 3) CENGN Access - Saul's access via Intel VPN is very slow, but off VPN is OK - Firefox works, but is also very slow via Tunnel. - Need to secure Jenkins if we expose it, o Scott will research 4) Build Layering - Scott working on it, issues with sync lst files with rpm files o Looking at Workflows. Cengn vs developer o Expect update reviews in the next couple of days 5) CentOS-8 - Manifest needs review - Other reviews pending Kernel review - We might want to look at using queue: starlingx in the .zuul.yaml o This will help serialize multi repo changes 6) PBR - Reviews pending in root, config, fault, and ansile-playbook - For non-pythonic packages setup.cfg and setup.py are needed From Don.Penney at windriver.com Thu Dec 19 19:23:13 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 19 Dec 2019 19:23:13 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC15D5829@ALA-MBD.corp.ad.wrs.com> For RPM: 1. The stringToVersion function comes from /usr/lib/python2.7/site-packages/rpmUtils/miscutils.py, which is in the "yum" package in CentOS 7. It is not available in CentOS 8, but the function itself is simple. I should be able to just clone it in patch_functions.py, with a reference to the original source, rather than pulling in another external package. 2. There is a python3-rpm package in CentOS 8 that provides the rpm.labelCompare() function, so we should be able to use that. I tried it out on a stock CentOS 8 system in python3. 3. The patch-agent was originally written using "yum", years ago, before switching to "smartpm" to align with Yocto. I should be able to find that history to use as a reference, and get the patch-agent running with "yum" again (or "dnf"). 
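[Illustrative sketch only -- not the actual patch_functions.py code. Assuming the CentOS 8 python3-rpm binding is installed, items 1 and 2 above could look roughly like this; the helper name and the simple split-based parsing are made up here for illustration:]

import rpm   # provided by the python3-rpm package on CentOS 8

def string_to_version(evr_string):
    # Minimal stand-in for rpmUtils.miscutils.stringToVersion:
    # split "epoch:version-release" into an (e, v, r) tuple.
    epoch = '0'
    if ':' in evr_string:
        epoch, evr_string = evr_string.split(':', 1)
    if '-' in evr_string:
        version, release = evr_string.rsplit('-', 1)
    else:
        version, release = evr_string, ''
    return (epoch, version, release)

# rpm.labelCompare() returns -1, 0 or 1, the same convention the old
# rpm-python module used, so existing EVR comparisons carry over.
result = rpm.labelCompare(string_to_version('2.0.1-5'),
                          string_to_version('2.0.1-4'))
# result == 1: 2.0.1-5 sorts after 2.0.1-4

[A real replacement would also need the regular-expression parsing of versions out of RPM filenames mentioned in the follow-up below, plus handling for packages without an explicit epoch or release.]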
I'll need to make sure the communication between patch agent and controller stays consistent, for interoperability between versions. I should be able to work this in the master branch, so that I've got a fully running system to test with. Cheers, Don. -----Original Message----- From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Thursday, December 19, 2019 2:01 AM To: Sun, Austin; Saul Wold; Penney, Don; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) Hi Penny: Do we have the plan for below story ( python-smartpm and rpm-python) ? CentOS8 upgrade is on-going ,and since python3 is using for CentOS8 , those 2 package can not be built successfully because they don't support python3. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Sun, Austin Sent: Tuesday, July 16, 2019 2:39 PM To: Saul Wold ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 Created Story [1] for python-smartpm and [2] for rpm-python. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, July 12, 2019 10:22 AM To: Sun, Austin ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/11/19 6:16 PM, Sun, Austin wrote: > Hi Penny: > Thanks a lot your info. > Story [1] is using to track python2to3 for stx.3.0 . > Task 35794 was created for upgrade requests-toolbelt. > Task 35795 for replacing rpm_python and Task 35796 for > replacing python-smartpm replacing python-smartpm probably need a story on it's own, it will completely change the patch update process. Sau! > > [1] https://storyboard.openstack.org/#!/story/2006158 > > Thank > BR > Austin Sun. > > -----Original Message----- > From: Penney, Don [mailto:Don.Penney at windriver.com] > Sent: Friday, July 12, 2019 5:12 AM > To: Saul Wold ; > starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > I think I can use this module in place of the rpm one: > https://pypi.org/project/version_utils/ > > It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python. > > > -----Original Message----- > From: Penney, Don > Sent: Thursday, July 11, 2019 3:50 PM > To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. > > We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? > > I can also look at the current use of the rpm module in patching and look for alternatives. 
> > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 11, 2019 3:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > > > On 7/10/19 7:03 AM, Sun, Austin wrote: >> Hi All: >> The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 >> Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. >> As sync in non-OpenStack distro meeting. >> We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . >> So below 6 packages are coming 3rd party which might be not python2to3 compliance. >> >> Package | who is using >> openvswitch | ovs >> python-cephfs | ceph >> python-smartpm | standalone package >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller > > Can you identify replacement python3 packages for any of these. > > I know we found out that smartpm is used for the patch process, I know > that smartpm is also an older project that does not have any upstream > support any further, so that will require a fair amount of work. > > Sau! > >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin [mailto:austin.sun at intel.com] >> Sent: Wednesday, July 10, 2019 4:03 PM >> To: Xie, Cindy ; Hu, Yong ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Hi All: >> New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. >> There are 11 packages not from centos but including python and may not be compatiable python2 and python3. >> Package | who is using >> openvswitch | ovs >> python-aniso8601 | keystone >> python-cephclient | ceph >> python-cephfs | ceph >> python-django-bash-completion | sysinv >> python-smartpm | standalone package >> python-unittest2 | sysinv >> python-XStatic-jquery-ui | stx-gui >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller >> >> >> I will continue check those 11 packages . >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1808073 >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin >> Sent: Thursday, July 4, 2019 11:43 AM >> To: Xie, Cindy ; Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] Python2 -> Python3 >> >> Hi Cindy: >> Yes. we will do it and update sheet. >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Xie, Cindy [mailto:cindy.xie at intel.com] >> Sent: Thursday, July 4, 2019 11:37 AM >> To: Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Austin, >> Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: >> >> In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. >> >> Thanks. 
- cindy >> >> -----Original Message----- >> From: Yong Hu [mailto:yong.hu at intel.com] >> Sent: Thursday, July 4, 2019 11:25 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? >> >> In my view the best solution is to wait for CentOS 8.0 :-) >> >> >> On 03/07/2019 2:55 PM, Dean Troyer wrote: >>> On 7/3/19 4:07 PM, Saul Wold wrote: >>>> The current proposal seems to be to completely convert the base >>>> CentOS7.6 system level python to use python3, this carries a high >>>> risk factor as changing out all system-level python code could have >>>> a cascade effect on system functionality and additional dependencies. >>>> While >>> >>> Changing the distro/system Python version out from under the rest of >>> the distro seems like an enormous time sink, much less a significant >>> reliability risk. >>> >>>> A better solution would be to build python3 and the associated >>>> requirements from the existing RHEL EPEL (Extra Packages for >>>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>>> This version correctly installs in a segregated directory tree. >>> >>> We would probably want to run a significant subset of the upstream >>> OpenStack testing on this combination as it is not (AFAIK) tested there. >>>  But this is true of any runtime + distro combination that is not >>> in the fairly short list of combinations that upstream OpenStack >>> actively tests. >>> >>>> Another option would be to delay the actual python2 conversion to >>>> StarlingX 4.0, the OpenStack Train release will still support python2. >>> >>> One downside to this is it leaves us no margin to defer the change >>> again, this is our second chance as it were.  OpenStack U (as of >>> now) is likely to drop py2 support as a guarantee across-the-board. >>> >>>> There is still work that is needed beyond the conversion of the >>>> python code itself to things like RPM specfiles data and other >>>> source code (such as, C code that has #includes of python2.7). It's >>>> not clear to me how much functional testing with python3 has >>>> occurred for the flock beyond what Dean has started with devstack. >>> >>> I managed to get the fault services running on py3, sysinv fell over >>> during the dbsync in my quick post-PTG trial run.  That is as far as >>> I took it.  
Anyone who wants to try can pick out the local.conf I >>> posted [0] >>> >>> dt >>> >>> [0] http://paste.openstack.org/show/753844/ >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Thu Dec 19 19:29:34 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 19 Dec 2019 19:29:34 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC15D5829@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FC15D5829@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC15D583C@ALA-MBD.corp.ad.wrs.com> Actually, for #1, I've already got a function that does something similar to stringToVersion, for parsing a version from an RPM filename using a regular expression. I'll just use that as the basis for a stringToVersion replacement, rather than cloning the code from "yum". -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, December 19, 2019 2:23 PM To: Sun, Austin; Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) For RPM: 1. The stringToVersion function comes from /usr/lib/python2.7/site-packages/rpmUtils/miscutils.py, which is in the "yum" package in CentOS 7. It is not available in CentOS 8, but the function itself is simple. I should be able to just clone it in patch_functions.py, with a reference to the original source, rather than pulling in another external package. 2. There is a python3-rpm package in CentOS 8 that provides the rpm.labelCompare() function, so we should be able to use that. I tried it out on a stock CentOS 8 system in python3. 3. The patch-agent was originally written using "yum", years ago, before switching to "smartpm" to align with Yocto. I should be able to find that history to use as a reference, and get the patch-agent running with "yum" again (or "dnf"). I'll need to make sure the communication between patch agent and controller stays consistent, for interoperability between versions. I should be able to work this in the master branch, so that I've got a fully running system to test with. 
Cheers, Don. -----Original Message----- From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Thursday, December 19, 2019 2:01 AM To: Sun, Austin; Saul Wold; Penney, Don; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) Hi Penny: Do we have the plan for below story ( python-smartpm and rpm-python) ? CentOS8 upgrade is on-going ,and since python3 is using for CentOS8 , those 2 package can not be built successfully because they don't support python3. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Sun, Austin Sent: Tuesday, July 16, 2019 2:39 PM To: Saul Wold ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 Created Story [1] for python-smartpm and [2] for rpm-python. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, July 12, 2019 10:22 AM To: Sun, Austin ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/11/19 6:16 PM, Sun, Austin wrote: > Hi Penny: > Thanks a lot your info. > Story [1] is using to track python2to3 for stx.3.0 . > Task 35794 was created for upgrade requests-toolbelt. > Task 35795 for replacing rpm_python and Task 35796 for > replacing python-smartpm replacing python-smartpm probably need a story on it's own, it will completely change the patch update process. Sau! > > [1] https://storyboard.openstack.org/#!/story/2006158 > > Thank > BR > Austin Sun. > > -----Original Message----- > From: Penney, Don [mailto:Don.Penney at windriver.com] > Sent: Friday, July 12, 2019 5:12 AM > To: Saul Wold ; > starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > I think I can use this module in place of the rpm one: > https://pypi.org/project/version_utils/ > > It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python. > > > -----Original Message----- > From: Penney, Don > Sent: Thursday, July 11, 2019 3:50 PM > To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. > > We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? > > I can also look at the current use of the rpm module in patching and look for alternatives. > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 11, 2019 3:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > > > On 7/10/19 7:03 AM, Sun, Austin wrote: >> Hi All: >> The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 >> Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. 
>> As sync in non-OpenStack distro meeting. >> We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . >> So below 6 packages are coming 3rd party which might be not python2to3 compliance. >> >> Package | who is using >> openvswitch | ovs >> python-cephfs | ceph >> python-smartpm | standalone package >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller > > Can you identify replacement python3 packages for any of these. > > I know we found out that smartpm is used for the patch process, I know > that smartpm is also an older project that does not have any upstream > support any further, so that will require a fair amount of work. > > Sau! > >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin [mailto:austin.sun at intel.com] >> Sent: Wednesday, July 10, 2019 4:03 PM >> To: Xie, Cindy ; Hu, Yong ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Hi All: >> New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. >> There are 11 packages not from centos but including python and may not be compatiable python2 and python3. >> Package | who is using >> openvswitch | ovs >> python-aniso8601 | keystone >> python-cephclient | ceph >> python-cephfs | ceph >> python-django-bash-completion | sysinv >> python-smartpm | standalone package >> python-unittest2 | sysinv >> python-XStatic-jquery-ui | stx-gui >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller >> >> >> I will continue check those 11 packages . >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1808073 >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin >> Sent: Thursday, July 4, 2019 11:43 AM >> To: Xie, Cindy ; Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] Python2 -> Python3 >> >> Hi Cindy: >> Yes. we will do it and update sheet. >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Xie, Cindy [mailto:cindy.xie at intel.com] >> Sent: Thursday, July 4, 2019 11:37 AM >> To: Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Austin, >> Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: >> >> In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. >> >> Thanks. - cindy >> >> -----Original Message----- >> From: Yong Hu [mailto:yong.hu at intel.com] >> Sent: Thursday, July 4, 2019 11:25 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? 
>> >> In my view the best solution is to wait for CentOS 8.0 :-) >> >> >> On 03/07/2019 2:55 PM, Dean Troyer wrote: >>> On 7/3/19 4:07 PM, Saul Wold wrote: >>>> The current proposal seems to be to completely convert the base >>>> CentOS7.6 system level python to use python3, this carries a high >>>> risk factor as changing out all system-level python code could have >>>> a cascade effect on system functionality and additional dependencies. >>>> While >>> >>> Changing the distro/system Python version out from under the rest of >>> the distro seems like an enormous time sink, much less a significant >>> reliability risk. >>> >>>> A better solution would be to build python3 and the associated >>>> requirements from the existing RHEL EPEL (Extra Packages for >>>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>>> This version correctly installs in a segregated directory tree. >>> >>> We would probably want to run a significant subset of the upstream >>> OpenStack testing on this combination as it is not (AFAIK) tested there. >>>  But this is true of any runtime + distro combination that is not >>> in the fairly short list of combinations that upstream OpenStack >>> actively tests. >>> >>>> Another option would be to delay the actual python2 conversion to >>>> StarlingX 4.0, the OpenStack Train release will still support python2. >>> >>> One downside to this is it leaves us no margin to defer the change >>> again, this is our second chance as it were.  OpenStack U (as of >>> now) is likely to drop py2 support as a guarantee across-the-board. >>> >>>> There is still work that is needed beyond the conversion of the >>>> python code itself to things like RPM specfiles data and other >>>> source code (such as, C code that has #includes of python2.7). It's >>>> not clear to me how much functional testing with python3 has >>>> occurred for the flock beyond what Dean has started with devstack. >>> >>> I managed to get the fault services running on py3, sysinv fell over >>> during the dbsync in my quick post-PTG trial run.  That is as far as >>> I took it.  
Anyone who wants to try can pick out the local.conf I >>> posted [0] >>> >>> dt >>> >>> [0] http://paste.openstack.org/show/753844/ >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From austin.sun at intel.com Fri Dec 20 01:18:07 2019 From: austin.sun at intel.com (Sun, Austin) Date: Fri, 20 Dec 2019 01:18:07 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC15D583C@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FC15D5829@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FC15D583C@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Penny: Cool. Thanks you. if you have patches available on master , just let me know. Thanks. BR Austin Sun. -----Original Message----- From: Penney, Don Sent: Friday, December 20, 2019 3:30 AM To: Penney, Don ; Sun, Austin ; Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) Actually, for #1, I've already got a function that does something similar to stringToVersion, for parsing a version from an RPM filename using a regular expression. I'll just use that as the basis for a stringToVersion replacement, rather than cloning the code from "yum". -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, December 19, 2019 2:23 PM To: Sun, Austin; Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) For RPM: 1. The stringToVersion function comes from /usr/lib/python2.7/site-packages/rpmUtils/miscutils.py, which is in the "yum" package in CentOS 7. It is not available in CentOS 8, but the function itself is simple. I should be able to just clone it in patch_functions.py, with a reference to the original source, rather than pulling in another external package. 2. There is a python3-rpm package in CentOS 8 that provides the rpm.labelCompare() function, so we should be able to use that. 
I tried it out on a stock CentOS 8 system in python3. 3. The patch-agent was originally written using "yum", years ago, before switching to "smartpm" to align with Yocto. I should be able to find that history to use as a reference, and get the patch-agent running with "yum" again (or "dnf"). I'll need to make sure the communication between patch agent and controller stays consistent, for interoperability between versions. I should be able to work this in the master branch, so that I've got a fully running system to test with. Cheers, Don. -----Original Message----- From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Thursday, December 19, 2019 2:01 AM To: Sun, Austin; Saul Wold; Penney, Don; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) Hi Penny: Do we have the plan for below story ( python-smartpm and rpm-python) ? CentOS8 upgrade is on-going ,and since python3 is using for CentOS8 , those 2 package can not be built successfully because they don't support python3. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Sun, Austin Sent: Tuesday, July 16, 2019 2:39 PM To: Saul Wold ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 Created Story [1] for python-smartpm and [2] for rpm-python. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, July 12, 2019 10:22 AM To: Sun, Austin ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/11/19 6:16 PM, Sun, Austin wrote: > Hi Penny: > Thanks a lot your info. > Story [1] is using to track python2to3 for stx.3.0 . > Task 35794 was created for upgrade requests-toolbelt. > Task 35795 for replacing rpm_python and Task 35796 for > replacing python-smartpm replacing python-smartpm probably need a story on it's own, it will completely change the patch update process. Sau! > > [1] https://storyboard.openstack.org/#!/story/2006158 > > Thank > BR > Austin Sun. > > -----Original Message----- > From: Penney, Don [mailto:Don.Penney at windriver.com] > Sent: Friday, July 12, 2019 5:12 AM > To: Saul Wold ; > starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > I think I can use this module in place of the rpm one: > https://pypi.org/project/version_utils/ > > It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python. > > > -----Original Message----- > From: Penney, Don > Sent: Thursday, July 11, 2019 3:50 PM > To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. > > We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? > > I can also look at the current use of the rpm module in patching and look for alternatives. 
> > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 11, 2019 3:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > > > On 7/10/19 7:03 AM, Sun, Austin wrote: >> Hi All: >> The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 >> Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. >> As sync in non-OpenStack distro meeting. >> We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . >> So below 6 packages are coming 3rd party which might be not python2to3 compliance. >> >> Package | who is using >> openvswitch | ovs >> python-cephfs | ceph >> python-smartpm | standalone package >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller > > Can you identify replacement python3 packages for any of these. > > I know we found out that smartpm is used for the patch process, I know > that smartpm is also an older project that does not have any upstream > support any further, so that will require a fair amount of work. > > Sau! > >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin [mailto:austin.sun at intel.com] >> Sent: Wednesday, July 10, 2019 4:03 PM >> To: Xie, Cindy ; Hu, Yong ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Hi All: >> New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. >> There are 11 packages not from centos but including python and may not be compatiable python2 and python3. >> Package | who is using >> openvswitch | ovs >> python-aniso8601 | keystone >> python-cephclient | ceph >> python-cephfs | ceph >> python-django-bash-completion | sysinv >> python-smartpm | standalone package >> python-unittest2 | sysinv >> python-XStatic-jquery-ui | stx-gui >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller >> >> >> I will continue check those 11 packages . >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1808073 >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin >> Sent: Thursday, July 4, 2019 11:43 AM >> To: Xie, Cindy ; Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] Python2 -> Python3 >> >> Hi Cindy: >> Yes. we will do it and update sheet. >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Xie, Cindy [mailto:cindy.xie at intel.com] >> Sent: Thursday, July 4, 2019 11:37 AM >> To: Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Austin, >> Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: >> >> In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. >> >> Thanks. 
- cindy >> >> -----Original Message----- >> From: Yong Hu [mailto:yong.hu at intel.com] >> Sent: Thursday, July 4, 2019 11:25 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? >> >> In my view the best solution is to wait for CentOS 8.0 :-) >> >> >> On 03/07/2019 2:55 PM, Dean Troyer wrote: >>> On 7/3/19 4:07 PM, Saul Wold wrote: >>>> The current proposal seems to be to completely convert the base >>>> CentOS7.6 system level python to use python3, this carries a high >>>> risk factor as changing out all system-level python code could have >>>> a cascade effect on system functionality and additional dependencies. >>>> While >>> >>> Changing the distro/system Python version out from under the rest of >>> the distro seems like an enormous time sink, much less a significant >>> reliability risk. >>> >>>> A better solution would be to build python3 and the associated >>>> requirements from the existing RHEL EPEL (Extra Packages for >>>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>>> This version correctly installs in a segregated directory tree. >>> >>> We would probably want to run a significant subset of the upstream >>> OpenStack testing on this combination as it is not (AFAIK) tested there. >>>  But this is true of any runtime + distro combination that is not >>> in the fairly short list of combinations that upstream OpenStack >>> actively tests. >>> >>>> Another option would be to delay the actual python2 conversion to >>>> StarlingX 4.0, the OpenStack Train release will still support python2. >>> >>> One downside to this is it leaves us no margin to defer the change >>> again, this is our second chance as it were.  OpenStack U (as of >>> now) is likely to drop py2 support as a guarantee across-the-board. >>> >>>> There is still work that is needed beyond the conversion of the >>>> python code itself to things like RPM specfiles data and other >>>> source code (such as, C code that has #includes of python2.7). It's >>>> not clear to me how much functional testing with python3 has >>>> occurred for the flock beyond what Dean has started with devstack. >>> >>> I managed to get the fault services running on py3, sysinv fell over >>> during the dbsync in my quick post-PTG trial run.  That is as far as >>> I took it.  
Anyone who wants to try can pick out the local.conf I >>> posted [0] >>> >>> dt >>> >>> [0] http://paste.openstack.org/show/753844/ >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yong.hu at intel.com Fri Dec 20 02:10:26 2019 From: yong.hu at intel.com (Yong Hu) Date: Fri, 20 Dec 2019 10:10:26 +0800 Subject: [Starlingx-discuss] Collect Filesystem is Now Available In-Reply-To: References: Message-ID: <5f2a1cc7-a367-7f42-9e32-0f83beea1aa4@intel.com> Hi Nathan, It is a really helpful tool! Thanks! A quick question: how long will this online system keep running online? :-) Or in another word Will it be always available along with Lauchpads? I signed up in this system and but I notice On "LaunchPads" panel, it shows only one LP under my name. Actually there should be more than one in "https://bugs.launchpad.net". Do you only list those LPs for which users have ever uploaded logs here? regards, Yong On 2019/12/18 11:31 PM, Yuechuan Chen wrote: > Hello Everyone, > > I’ve developed the Collect Filesystem tool which is now available for > the StarlingX community.You can use this tool to store large collect > files when reporting bugs via Launchpad.Simply use the tool to upload a > file and then add its URL location into the Launchpad bug.You no longer > need to split up large collect files and attach multiple files to your bugs. > > It uses Flask as its framework and OpenID for user identification. The > current address is https://files.starlingx.kube.cengn.ca/ > > To register, key in your OpenID at the sign in page. This id can be > found in your Launchpad profile. > > > This will take you to the Ubuntu One login page, use your own > credentials for that. As this is the first time you sign in, you will be > asked to create your profile. > > > As the registration complete, you now have access to all features in the > Collect Filesystem. Use the navigation menu on the left to use those > features. > > The profile page contains your user information that can be modified any > time. 
You can delete your profile if you wish, but be warned that you > will lose all the files you uploaded and they will not be recoverable. > > The upload page allows you to upload and store your files on the server. > A StarlingX Launchpad needs to be specified, and the application only > accepts valid collect log files. > > The personal files and public files pages list the files uploaded by > yourself and by the entire community. You have the options to rename, > delete and change the Launchpad of your own log files. > > The launchpads page lets users find files by launchpads for which users > have previously uploaded collect logs. You will be able to search > through the launchpads by their title or id. You can also download the > files under a specific Launchpad as a compressed file. > > Thank you everyone, I hope you like this tool. Please let me and Al > Bailey know if you have any suggestions or feedback to improve this tool. > > Sincerely, > > Nathan Chen > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From cristopher.j.lemus.contreras at intel.com Fri Dec 20 18:56:45 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Fri, 20 Dec 2019 18:56:45 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191220 Message-ID: <4B4FC8A2-4003-4BC8-90B4-DAB56A54A197@intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-20 (link) Status: RED =========================================== Sanity Test executed in Bare Metal Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 09 TCs [BLOCKED] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed in Virtual Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [BLOCKED] Sanity Platform 09 TCs [BLOCKED] TOTAL: [ 66 TCs ] Both Standard with Dedicated Storage configurations (virtual and baremetal) failed to provision for a bug with a review in progress: https://bugs.launchpad.net/starlingx/+bug/1856078 Thanks & Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cristopher.j.lemus.contreras at intel.com Sat Dec 21 09:09:11 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Sat, 21 Dec 2019 03:09:11 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <3cea8d$7dd6ol@fmsmga002.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191221T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From cristopher.j.lemus.contreras at intel.com Sun Dec 22 09:09:01 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Sun, 22 Dec 2019 03:09:01 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <029d15$69f3ia@orsmga008.jf.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191222T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From haochuan.z.chen at intel.com Mon Dec 23 07:16:37 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Mon, 23 Dec 2019 07:16:37 +0000 Subject: [Starlingx-discuss] question about ceph or storage In-Reply-To: <4C60D9C5C8176C47874FFF36647AA19EA4747683@ALA-MBD.corp.ad.wrs.com> References: <56829C2A36C2E542B0CCB9854828E4D85628DE96@CDSMSX102.ccr.corp.intel.com> <4C60D9C5C8176C47874FFF36647AA19EA4747683@ALA-MBD.corp.ad.wrs.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628F3EA@CDSMSX102.ccr.corp.intel.com> Hi Ovidiu For "system storage-backend-add -c ", user could deploy another ceph cluster, and add as storage backend, correctly. And this is this command's intention, correct? Martin, Chen SSP, Software Engineer 021-61164330 From: Poncea, Ovidiu Sent: Wednesday, December 18, 2019 7:47 PM To: Chen, Haochuan Z ; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Hi Chen, see inline. 
________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Tuesday, December 17, 2019 10:46 AM To: Church, Robert; Poncea, Ovidiu Cc: 'starlingx-discuss at lists.starlingx.io' Subject: question about ceph or storage Hi Bob & Ovidiu Some question about ceph or storage. 1, What's storage tier and storage profile? What's the [Ovi] Storage tiering is equivalent with this: https://ceph.io/planet/deploying-ceph-with-storage-tiering/ . [Ovi] Profiles are managed by system storprofile-* and system host-apply-profile and are used to copy configuration from one node to another, identical node on initial provisioning. These profiles are only in system inventory, there is no Ceph equivalent. 2, why for duplex it request such puppet class dependency in ceph.pp? Is this request make all drbd config before class ceph? [Ovi] ceph-mon in AIO-DX is DRBD managed and it has a single, floating, monitor. On DX, when you swact, the monitor is stopped on the active controller and started on the standby controller. Drbd::Resource <| |> -> Class['::ceph'] And flag file ".node_ceph_configured", to inform drbd make init setup before ceph config? 3, To launch ceph-mon, create a logical volume "ceph-mon-lv" and mount to /var/lib/ceph/mon, not directly mkdir "/var/lib/ceph/mon" for ceph-mon [Ovi] Ceph monitors have their own logical volume. They are managed through "system ceph-mon*" commands. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, December 11, 2019 4:46 PM To: Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu >; Qi, Mingyuan > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob Some question, what's storage tier and storage profile? As you said, we no longer manage pool and pg num, is this also unnecessary and we should remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor See inline... From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. 
* I think it's potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don't think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don't know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From haochuan.z.chen at intel.com Mon Dec 23 07:18:35 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Mon, 23 Dec 2019 07:18:35 +0000 Subject: [Starlingx-discuss] question about ceph or storage In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85628F3EA@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628DE96@CDSMSX102.ccr.corp.intel.com> <4C60D9C5C8176C47874FFF36647AA19EA4747683@ALA-MBD.corp.ad.wrs.com> <56829C2A36C2E542B0CCB9854828E4D85628F3EA@CDSMSX102.ccr.corp.intel.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D85628F402@CDSMSX102.ccr.corp.intel.com> Sorry to disturb! Merry Christmas to all StarlingX member! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Monday, December 23, 2019 3:17 PM To: Poncea, Ovidiu ; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Hi Ovidiu For "system storage-backend-add -c ", user could deploy another ceph cluster, and add as storage backend, correctly. And this is this command's intention, correct? Martin, Chen SSP, Software Engineer 021-61164330 From: Poncea, Ovidiu > Sent: Wednesday, December 18, 2019 7:47 PM To: Chen, Haochuan Z >; Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: question about ceph or storage Hi Chen, see inline. ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Tuesday, December 17, 2019 10:46 AM To: Church, Robert; Poncea, Ovidiu Cc: 'starlingx-discuss at lists.starlingx.io' Subject: question about ceph or storage Hi Bob & Ovidiu Some question about ceph or storage. 1, What's storage tier and storage profile? What's the [Ovi] Storage tiering is equivalent with this: https://ceph.io/planet/deploying-ceph-with-storage-tiering/ . [Ovi] Profiles are managed by system storprofile-* and system host-apply-profile and are used to copy configuration from one node to another, identical node on initial provisioning. These profiles are only in system inventory, there is no Ceph equivalent. 2, why for duplex it request such puppet class dependency in ceph.pp? Is this request make all drbd config before class ceph? [Ovi] ceph-mon in AIO-DX is DRBD managed and it has a single, floating, monitor. On DX, when you swact, the monitor is stopped on the active controller and started on the standby controller. Drbd::Resource <| |> -> Class['::ceph'] And flag file ".node_ceph_configured", to inform drbd make init setup before ceph config? 3, To launch ceph-mon, create a logical volume "ceph-mon-lv" and mount to /var/lib/ceph/mon, not directly mkdir "/var/lib/ceph/mon" for ceph-mon [Ovi] Ceph monitors have their own logical volume. They are managed through "system ceph-mon*" commands. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, December 11, 2019 4:46 PM To: Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu >; Qi, Mingyuan > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob Some question, what's storage tier and storage profile? As you said, we no longer manage pool and pg num, is this also unnecessary and we should remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor See inline... 
From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. * I think it's potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don't think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don't know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. 
Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Mon Dec 23 08:29:20 2019 From: austin.sun at intel.com (Sun, Austin) Date: Mon, 23 Dec 2019 08:29:20 +0000 Subject: [Starlingx-discuss] [StarlingX] flock services branch for centos8 Message-ID: Hi Saul: As some developers are building some flock services package for centos8 , and need some modify it. would you like to create flock services centos8 branches ? Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Mon Dec 23 09:08:08 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Mon, 23 Dec 2019 03:08:08 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: List of docker images required for "platform-integ-apps": BUILD_ID="20191223T000000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Ian.Jolliffe at windriver.com Mon Dec 23 15:41:02 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Mon, 23 Dec 2019 15:41:02 +0000 Subject: [Starlingx-discuss] [TSC] Minutes 12/19 Message-ID: We had a short meeting this week. Here are the major outcomes. Next TSC meeting will be Jan9th – this will be the first meeting of 2020. We agreed the Hack-a-thon the week of Jan 13th will move forward globally, with some collaboration resources set up to make this happen. 
Stay tuned for more. * Devote IRC channel - set topic to hack-a-thon * Promote within various forums to get attention * Add to agenda for community call on Jan 8th. * Ian will send a note to the mailing list. Hack-a-thon last week in Beijing: (week of Dec 9th) * 99cloud, Fibrehome, Intel, JITStack, China Unicom Wo Cloud team * Bug fixing focus, Mini meetup & Tech discussion * Here is the etherpad for this Hack-a-thon in Beijing: https://etherpad.openstack.org/p/OpenSource-Hackathon-10-Beijing On behalf of the TSC, 2019 has been a fantastic year for the StarlingX project, many thanks to all the contributor/users around the world for making this such a big success. Regards; Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ian.Jolliffe at windriver.com Mon Dec 23 16:32:25 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Mon, 23 Dec 2019 16:32:25 +0000 Subject: [Starlingx-discuss] [Test Hack-a-thon] Week of January 13th Message-ID: <78C65FD4-E67E-4EC6-8EA6-4F95A652AA54@windriver.com> Hi all; Many in the community have been talking about how to improve our unit and system test coverage. We have discussed this at PTG’s, Summits and other events. We have been taking small steps on this journey and making decent progress. A few people have suggested a hack-a-thon model to increase our progress and focus on some key areas to improve coverage and increase our leverage of Zuul and other tools. Here are the proposed next steps: * More discussion at community meeting Jan 8th and TSC Jan 9th to finish the planning * We will devote the IRC channel to the hack-a-thon the week of the 13th. * Looking for other ideas here – open zoom channel? * We will have a set of proposed areas that we want to focus on for discussion Jan 8/9 * If people have ideas or suggestions please reply to this email. * We discussed various ways to stay in synch globally so our work doesn’t collide, but, we need to figure out the approach. * Looking for ideas and suggestions on how to make this work for everyone. Stay tuned for more info as the dates approach. Regards; Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Dec 23 18:59:39 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 23 Dec 2019 18:59:39 +0000 Subject: [Starlingx-discuss] [ Test ] meeting 12/24 and 12/31 cancelled Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CF3D508@FMSMSX114.amr.corp.intel.com> Hello, We are not holding test meetings this week and next one: very low attendance is expected due the holidays. Please send any question or comment to the mailing list. Happy Holidays! Enjoy. Ada From sgw at linux.intel.com Mon Dec 23 22:55:35 2019 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 23 Dec 2019 14:55:35 -0800 Subject: [Starlingx-discuss] [StarlingX] flock services branch for centos8 In-Reply-To: References: Message-ID: <777cef18-8071-f59f-efe4-3674db27b7fa@linux.intel.com> On 12/23/19 12:29 AM, Sun, Austin wrote: > Hi Saul: > > As some developers are building some flock services package for centos8 > , and need some modify it.  would you like to create flock services > centos8 branches ? > Can you be more specific about which packages? What kind of modifications are needed for Centos8 and are they in conflict with the Centos7 build? More details would be good. Thanks Sau! > Thanks. > > BR > Austin Sun. 
> From cristopher.j.lemus.contreras at intel.com Tue Dec 24 00:01:24 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Tue, 24 Dec 2019 00:01:24 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191223 Message-ID: <4653F41B-C576-43ED-A040-C9F51CE21E35@intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-23 (link) Status: GREEN =========================================== Sanity Test executed in Bare Metal Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed in Virtual Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Thanks & Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Tue Dec 24 01:11:32 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 24 Dec 2019 01:11:32 +0000 Subject: [Starlingx-discuss] [StarlingX] flock services branch for centos8 In-Reply-To: <777cef18-8071-f59f-efe4-3674db27b7fa@linux.intel.com> References: <777cef18-8071-f59f-efe4-3674db27b7fa@linux.intel.com> Message-ID: Hi Saul: Flock services spec includes "python2-pip/python2-wheel" which need be updated to python3, So we need centos8 branch: The List of Repos is: Config Distributedcloud Fault Gui Ha Integ Metal Monitoring Nfv Upstream utilities Update Thanks. BR Austin Sun. -----Original Message----- From: Saul Wold Sent: Tuesday, December 24, 2019 6:56 AM To: Sun, Austin ; starlingx-discuss at lists.starlingx.io Subject: Re: [StarlingX] flock services branch for centos8 On 12/23/19 12:29 AM, Sun, Austin wrote: > Hi Saul: > > As some developers are building some flock services package for > centos8 , and need some modify it.  would you like to create flock > services > centos8 branches ? > Can you be more specific about which packages? What kind of modifications are needed for Centos8 and are they in conflict with the Centos7 build? More details would be good. Thanks Sau! > Thanks. > > BR > Austin Sun. 
> From sgw at linux.intel.com Tue Dec 24 06:01:05 2019 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 23 Dec 2019 22:01:05 -0800 Subject: [Starlingx-discuss] [StarlingX] flock services branch for centos8 In-Reply-To: References: <777cef18-8071-f59f-efe4-3674db27b7fa@linux.intel.com> Message-ID: Hi All, I created the branches and added reviews for updated .gitreview in those branches. If there are folks around tomorrow, please provide +2 and +W as appropriate. Thanks & Happy Holidays to all! Sau! On 12/23/19 5:11 PM, Sun, Austin wrote: > Hi Saul: > Flock services spec includes "python2-pip/python2-wheel" which need be updated to python3, So we need centos8 branch: > > The List of Repos is: > > Config > Distributedcloud > Fault > Gui > Ha > Integ > Metal > Monitoring > Nfv > Upstream > utilities > Update > > Thanks. > BR > Austin Sun. > > -----Original Message----- > From: Saul Wold > Sent: Tuesday, December 24, 2019 6:56 AM > To: Sun, Austin ; starlingx-discuss at lists.starlingx.io > Subject: Re: [StarlingX] flock services branch for centos8 > > > > On 12/23/19 12:29 AM, Sun, Austin wrote: >> Hi Saul: >> >> As some developers are building some flock services package for >> centos8 , and need some modify it.  would you like to create flock >> services >> centos8 branches ? >> > Can you be more specific about which packages? What kind of modifications are needed for Centos8 and are they in conflict with the > Centos7 build? > > More details would be good. > > Thanks > Sau! > >> Thanks. >> >> BR >> Austin Sun. >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From parkeryan at tencent.com Tue Dec 24 07:26:30 2019 From: parkeryan at tencent.com (=?gb2312?B?cGFya2VyeWFuKOPG1r693Ck=?=) Date: Tue, 24 Dec 2019 07:26:30 +0000 Subject: [Starlingx-discuss] Compute node exceeded memory threshold and went into degraded state or offline state. Message-ID: Hi, folks I have deployed StarlingX 2.0 with 2 controller nodes and 12 compute nodes, and I also installed the stx-openstack application. I have some questions about deploying virtual servers with openstack. Here are the virtual servers that I have deployed; actually, this is just part of what I wanted to deploy, because some compute nodes entered degraded state after deploying these virtual servers.
Host Name Type VCPU (used) VCPU (total) Memory (used) Memory (total) Local Storage (used) Local Storage (total) Instances compute-0 QEMU 10 46 19.8GB 127.9GB 50GB 265GB 5 compute-1 QEMU 10 46 19.8GB 127.9GB 50GB 265GB 5 compute-2 QEMU 12 54 21.8GB 127.9GB 60GB 265GB 6 compute-3 QEMU 8 54 17.8GB 127.9GB 40GB 265GB 4 compute-4 QEMU 10 54 19.8GB 127.9GB 50GB 265GB 5 compute-5 QEMU 8 54 17.8GB 127.9GB 40GB 265GB 4 compute-6 QEMU 12 46 21.8GB 127.9GB 60GB 265GB 6 compute-7 QEMU 14 54 23.8GB 127.9GB 70GB 265GB 7 compute-8 QEMU 8 54 17.8GB 127.9GB 40GB 265GB 4 compute-10 QEMU 6 54 15.8GB 127.9GB 30GB 265GB 3 compute-11 QEMU 12 54 21.8GB 127.9GB 60GB 265GB 6 [sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 18 | controller-1 | controller | unlocked | enabled | available | | 19 | compute-0 | worker | unlocked | enabled | degraded | | 20 | compute-1 | worker | unlocked | enabled | degraded | | 21 | compute-2 | worker | unlocked | enabled | degraded | | 22 | compute-3 | worker | unlocked | enabled | available | | 23 | compute-4 | worker | unlocked | enabled | available | | 24 | compute-5 | worker | unlocked | enabled | available | | 25 | compute-6 | worker | unlocked | enabled | degraded | | 26 | compute-7 | worker | unlocked | enabled | degraded | | 27 | compute-8 | worker | unlocked | enabled | available | | 28 | compute-9 | worker | locked | disabled | online | | 29 | compute-10 | worker | unlocked | enabled | available | | 30 | compute-11 | worker | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ From the fm alarm log, I found it was because the host memory had exceeded the threshold. [sysadmin at controller-0 ~(keystone_admin)]$ fm alarm-list +----------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------+----------+-------------------------+ | Alarm ID | Reason Text | Entity ID | Severity | Time Stamp | +----------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------+----------+-------------------------+ | 100.103 | Platform Memory threshold exceeded ; threshold 90.00%, actual 98.55% | host=compute-1.numa=node1 | critical | 2019-12-24T06:56:44. | | | | | | 392673 | | | | | | | | 100.103 | Platform Memory threshold exceeded ; threshold 90.00%, actual 96.63% | host=compute-0.numa=node1 | critical | 2019-12-24T06:56:39. | | | | | | 336593 | | | | | | | | 100.103 | Platform Memory threshold exceeded ; threshold 80.00%, actual 80.70% | host=compute-4.numa=node1 | major | 2019-12-24T06:52:38. | | | | | | 287378 | | | | | | | | 100.103 | Platform Memory threshold exceeded ; threshold 90.00%, actual 98.76% | host=compute-7.numa=node1 | critical | 2019-12-24T06:39:41. | | | | | | 186485 | | | | | | | | 100.103 | Platform Memory threshold exceeded ; threshold 90.00%, actual 97.07% | host=compute-2.numa=node1 | critical | 2019-12-24T06:39:29. | | | | | | 700993 | | | | | | | | 100.103 | Platform Memory threshold exceeded ; threshold 90.00%, actual 98.77% | host=compute-6.numa=node1 | critical | 2019-12-24T06:39:15.
| | | | | | 864868 | | | | | | | | 200.006 | compute-9 is degraded due to the failure of its 'pci-irq-affinity-agent' process. Auto recovery of this major process is in | host=compute-9.process=pci-irq- | major | 2019-12-23T07:00:11. | | | progress. | affinity-agent | | 942609 | | | | | | | | 200.006 | compute-9 critical 'kubelet' process has failed and could not be auto-recovered gracefully. Auto-recovery progression by host | host=compute-9.process=kubelet | critical | 2019-12-23T03:50:02. | | | reboot is required and in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful. | | | 934247 | | | | | | | | 200.001 | compute-9 was administratively locked to take it out-of-service. | host=compute-9 | warning | 2019-12-23T03:46:04. | | | | | | 527208 | | | | | | | +----------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------+----------+-------------------------+ And my question is, who exhausted the memory, from the openstack and system command, there is a lot of available memory which can be allocated. controller-0:~/sow$ openstack host show compute-0 +-----------+----------------------------------+-----+-----------+---------+ | Host | Project | CPU | Memory MB | Disk GB | +-----------+----------------------------------+-----+-----------+---------+ | compute-0 | (total) | 46 | 130978 | 265 | | compute-0 | (used_now) | 10 | 20240 | 50 | | compute-0 | (used_max) | 10 | 10240 | 50 | | compute-0 | 943ada3993eb4e9bada5e9eac3aadeb0 | 10 | 10240 | 50 | +-----------+----------------------------------+-----+-----------+---------+ controller-0:~/sow$ openstack host show compute-1 +-----------+----------------------------------+-----+-----------+---------+ | Host | Project | CPU | Memory MB | Disk GB | +-----------+----------------------------------+-----+-----------+---------+ | compute-1 | (total) | 46 | 130978 | 265 | | compute-1 | (used_now) | 10 | 20240 | 50 | | compute-1 | (used_max) | 10 | 10240 | 50 | | compute-1 | 943ada3993eb4e9bada5e9eac3aadeb0 | 10 | 10240 | 50 | +-----------+----------------------------------+-----+-----------+---------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-memory-list compute-0 +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+ | processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_total_ | app_hp_total_2M | app_hp_avail_2M | app_hp_pending_2M | app_hp_total_1G | app_hp_avail_1G | app_hp_pending_1G | app_hp_use_1G | | | al(MiB) | rm(MiB) | il(MiB) | configured | size(M | total | avail | _reqd | 4K | | | | | | | | | | | | | | iB) | | | | | | | | | | | | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+ | 0 | 57442 | 8000 | 57442 | True | 1024 | 0 | 0 | None | 1470976 | 25848 | 25848 | None | 0 | 0 | None | True | | 1 | 63536 | 2000 | 63536 | True | 1024 | 0 | 0 | None | 1626624 | 28591 | 28591 | None | 0 | 0 | None | True | 
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+ [sysadmin at controller-0 ~(keystone_admin)]$ system host-memory-list compute-1 +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+ | processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_total_ | app_hp_total_2M | app_hp_avail_2M | app_hp_pending_2M | app_hp_total_1G | app_hp_avail_1G | app_hp_pending_1G | app_hp_use_1G | | | al(MiB) | rm(MiB) | il(MiB) | configured | size(M | total | avail | _reqd | 4K | | | | | | | | | | | | | | iB) | | | | | | | | | | | | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+ | 0 | 57442 | 8000 | 57442 | True | 1024 | 0 | 0 | None | 1470976 | 25848 | 25848 | None | 0 | 0 | None | True | | 1 | 63536 | 2000 | 63536 | True | 1024 | 0 | 0 | None | 1626624 | 28591 | 28591 | None | 0 | 0 | None | True | +-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+ [sysadmin at controller-0 ~(keystone_admin)]$ If I keep on deploying more virtual servers, the compute node will go offline state, and go to booting state. And such compute node may corrupt and sometimes I have to reinstall the stx-openstack application to make the compute node come back. Best regards parkeryan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Tue Dec 24 09:10:20 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Tue, 24 Dec 2019 03:10:20 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <412985$65til9@orsmga007.jf.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191224T000000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Don.Penney at windriver.com Tue Dec 24 15:34:18 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 24 Dec 2019 15:34:18 +0000 Subject: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) References: <6703202FD9FDFF4A8DA9ACF104AE129FC15D5829@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC15D698F@ALA-MBD.corp.ad.wrs.com> The first part of the work has merged, removing the dependency on the yum rpmUtils module: https://review.opendev.org/700365 I'm starting on the transition to yum next. Cheers, Don. -----Original Message----- From: Penney, Don Sent: Thursday, December 19, 2019 2:30 PM To: Penney, Don; Sun, Austin; Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) Actually, for #1, I've already got a function that does something similar to stringToVersion, for parsing a version from an RPM filename using a regular expression. I'll just use that as the basis for a stringToVersion replacement, rather than cloning the code from "yum". -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, December 19, 2019 2:23 PM To: Sun, Austin; Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) For RPM: 1. The stringToVersion function comes from /usr/lib/python2.7/site-packages/rpmUtils/miscutils.py, which is in the "yum" package in CentOS 7. It is not available in CentOS 8, but the function itself is simple. I should be able to just clone it in patch_functions.py, with a reference to the original source, rather than pulling in another external package. 2. There is a python3-rpm package in CentOS 8 that provides the rpm.labelCompare() function, so we should be able to use that. I tried it out on a stock CentOS 8 system in python3. 3. The patch-agent was originally written using "yum", years ago, before switching to "smartpm" to align with Yocto. I should be able to find that history to use as a reference, and get the patch-agent running with "yum" again (or "dnf"). I'll need to make sure the communication between patch agent and controller stays consistent, for interoperability between versions. 
I should be able to work this in the master branch, so that I've got a fully running system to test with. Cheers, Don. -----Original Message----- From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Thursday, December 19, 2019 2:01 AM To: Sun, Austin; Saul Wold; Penney, Don; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python2 -> Python3 (smartpm and rpm-python) Hi Penny: Do we have the plan for below story ( python-smartpm and rpm-python) ? CentOS8 upgrade is on-going ,and since python3 is using for CentOS8 , those 2 package can not be built successfully because they don't support python3. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Sun, Austin Sent: Tuesday, July 16, 2019 2:39 PM To: Saul Wold ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 Created Story [1] for python-smartpm and [2] for rpm-python. [1] https://storyboard.openstack.org/#!/story/2006227 [2] https://storyboard.openstack.org/#!/story/2006228 Thanks. BR Austin Sun. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, July 12, 2019 10:22 AM To: Sun, Austin ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python2 -> Python3 On 7/11/19 6:16 PM, Sun, Austin wrote: > Hi Penny: > Thanks a lot your info. > Story [1] is using to track python2to3 for stx.3.0 . > Task 35794 was created for upgrade requests-toolbelt. > Task 35795 for replacing rpm_python and Task 35796 for > replacing python-smartpm replacing python-smartpm probably need a story on it's own, it will completely change the patch update process. Sau! > > [1] https://storyboard.openstack.org/#!/story/2006158 > > Thank > BR > Austin Sun. > > -----Original Message----- > From: Penney, Don [mailto:Don.Penney at windriver.com] > Sent: Friday, July 12, 2019 5:12 AM > To: Saul Wold ; > starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > I think I can use this module in place of the rpm one: > https://pypi.org/project/version_utils/ > > It looks like this provides an equivalent to rpm.labelCompare that should allow me to drop rpm-python. > > > -----Original Message----- > From: Penney, Don > Sent: Thursday, July 11, 2019 3:50 PM > To: 'Saul Wold'; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Python2 -> Python3 > > It probably makes sense for me to look at moving the patching framework from "smart" back to "yum" - and at the same, restructure the code so that package management is a backend. It was originally written with yum many years ago, then moved to "smart" to align with yocto. > > We can also look at upversioning requests-toolbelt - we're on an older version solely because there's never been a reason to update it. That said, the version we're using, 0.5.1, is in pypi as py2.py3, so presumably that's ok? > > I can also look at the current use of the rpm module in patching and look for alternatives. 
> > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, July 11, 2019 3:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Python2 -> Python3 > > > > On 7/10/19 7:03 AM, Sun, Austin wrote: >> Hi All: >> The logical for here is if package from centos directly , we will wait upgrading to CentOS 8.x to compliance with python3 >> Please filter 'Do not contain centos' in Column N , then it will show below 11 packages. >> As sync in non-OpenStack distro meeting. >> We still can filter out the 4 packages from fedora project and python-cephclient as flock servers . >> So below 6 packages are coming 3rd party which might be not python2to3 compliance. >> >> Package | who is using >> openvswitch | ovs >> python-cephfs | ceph >> python-smartpm | standalone package >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller > > Can you identify replacement python3 packages for any of these. > > I know we found out that smartpm is used for the patch process, I know > that smartpm is also an older project that does not have any upstream > support any further, so that will require a fair amount of work. > > Sau! > >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin [mailto:austin.sun at intel.com] >> Sent: Wednesday, July 10, 2019 4:03 PM >> To: Xie, Cindy ; Hu, Yong ; >> 'starlingx-discuss at lists.starlingx.io' >> >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Hi All: >> New sheet was updated to [1]. Adding column N (repo_info) to record where the package coming from. >> There are 11 packages not from centos but including python and may not be compatiable python2 and python3. >> Package | who is using >> openvswitch | ovs >> python-aniso8601 | keystone >> python-cephclient | ceph >> python-cephfs | ceph >> python-django-bash-completion | sysinv >> python-smartpm | standalone package >> python-unittest2 | sysinv >> python-XStatic-jquery-ui | stx-gui >> qemu-kvm-ev | mtce-compute >> requests-toolbelt | cgcs-patch-controller >> rpm-python | cgcs-patch-controller >> >> >> I will continue check those 11 packages . >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1808073 >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Sun, Austin >> Sent: Thursday, July 4, 2019 11:43 AM >> To: Xie, Cindy ; Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] Python2 -> Python3 >> >> Hi Cindy: >> Yes. we will do it and update sheet. >> >> Thanks. >> BR >> Austin Sun. >> >> -----Original Message----- >> From: Xie, Cindy [mailto:cindy.xie at intel.com] >> Sent: Thursday, July 4, 2019 11:37 AM >> To: Hu, Yong ; >> starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> Austin, >> Can you add one more column in your xls sheet: https://bugs.launchpad.net/starlingx/+bug/1808073/+attachment/5274592/+files/rpm_python_status-stx.2.0.xlsx: >> >> In your column "I", for your "3rd party" category, to break down "CentOS package" & "3rd party package", so that we can understand how much we can rely on CentOS 8.0, especially for those "risk" ones. >> >> Thanks. 
- cindy >> >> -----Original Message----- >> From: Yong Hu [mailto:yong.hu at intel.com] >> Sent: Thursday, July 4, 2019 11:25 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Python2 -> Python3 >> >> A general question, do we have experience and confidence to ensure 2 python versions (both interpreters and pip libs) can co-exist and each of packages can correctly refer to them?? >> >> In my view the best solution is to wait for CentOS 8.0 :-) >> >> >> On 03/07/2019 2:55 PM, Dean Troyer wrote: >>> On 7/3/19 4:07 PM, Saul Wold wrote: >>>> The current proposal seems to be to completely convert the base >>>> CentOS7.6 system level python to use python3, this carries a high >>>> risk factor as changing out all system-level python code could have >>>> a cascade effect on system functionality and additional dependencies. >>>> While >>> >>> Changing the distro/system Python version out from under the rest of >>> the distro seems like an enormous time sink, much less a significant >>> reliability risk. >>> >>>> A better solution would be to build python3 and the associated >>>> requirements from the existing RHEL EPEL (Extra Packages for >>>> Enterprise Linux) Source RPMs repo and install them into the ISO. >>>> This version correctly installs in a segregated directory tree. >>> >>> We would probably want to run a significant subset of the upstream >>> OpenStack testing on this combination as it is not (AFAIK) tested there. >>>  But this is true of any runtime + distro combination that is not >>> in the fairly short list of combinations that upstream OpenStack >>> actively tests. >>> >>>> Another option would be to delay the actual python2 conversion to >>>> StarlingX 4.0, the OpenStack Train release will still support python2. >>> >>> One downside to this is it leaves us no margin to defer the change >>> again, this is our second chance as it were.  OpenStack U (as of >>> now) is likely to drop py2 support as a guarantee across-the-board. >>> >>>> There is still work that is needed beyond the conversion of the >>>> python code itself to things like RPM specfiles data and other >>>> source code (such as, C code that has #includes of python2.7). It's >>>> not clear to me how much functional testing with python3 has >>>> occurred for the flock beyond what Dean has started with devstack. >>> >>> I managed to get the fault services running on py3, sysinv fell over >>> during the dbsync in my quick post-PTG trial run.  That is as far as >>> I took it.  
Anyone who wants to try can pick out the local.conf I >>> posted [0] >>> >>> dt >>> >>> [0] http://paste.openstack.org/show/753844/ >>> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From weifei.yu at intel.com Wed Dec 25 07:51:46 2019 From: weifei.yu at intel.com (Yu, Weifei) Date: Wed, 25 Dec 2019 07:51:46 +0000 Subject: [Starlingx-discuss] How to deploy starlingx with an IPv6 configuration Message-ID: Hi All, I'm trying to deploy starlingx with an IPv6 configuration, but the official guide is very brief, like this. ------------------------------------------------------------------------ Note By default, StarlingX uses IPv4. To use StarlingX with IPv6: * The entire infrastructure and cluster configuration must be IPv6, with the exception of the PXE boot network. * Not all external servers are reachable via IPv6 addresses (for example Docker registries). Depending on your infrastructure, it may be necessary to deploy a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6. ------------------------------------------------------------------------ Are there any detailed docs I can refer to? B&R weifei -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cristopher.j.lemus.contreras at intel.com Wed Dec 25 09:10:50 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Wed, 25 Dec 2019 03:10:50 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <0eeb65$804raj@FMSMGA003.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191225T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From cristopher.j.lemus.contreras at intel.com Thu Dec 26 09:05:35 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Thu, 26 Dec 2019 03:05:35 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <3cea8d$7emri3@fmsmga002.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191226T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From cristopher.j.lemus.contreras at intel.com Thu Dec 26 15:45:49 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Thu, 26 Dec 2019 15:45:49 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191226 Message-ID: <67A6D7D8-70B1-4749-A88E-D9D48228F736@intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-26 (link) Status: GREEN =========================================== Sanity Test executed in Bare Metal Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs 
[PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed in Virtual Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Thanks & Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Fri Dec 27 09:10:23 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Fri, 27 Dec 2019 03:10:23 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <145d7b$cerbto@fmsmga005.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191227T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From cristopher.j.lemus.contreras at intel.com Fri Dec 27 14:42:42 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Fri, 27 Dec 2019 14:42:42 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191227 Message-ID: <6159FC59-785F-475A-8804-66FD8115E60A@intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-27 (link) Status: GREEN =========================================== Sanity Test executed in Bare Metal Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed in Virtual Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 
OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Thanks & Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Fri Dec 27 20:40:34 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Fri, 27 Dec 2019 20:40:34 +0000 Subject: [Starlingx-discuss] CNCF K8s conformance - Error found Message-ID: <3177D746-58C3-4BE7-8E19-2AAA6A0036A0@intel.com> Hello All, I created a launchpad to track an error found during the CNCF K8s conformance procedure: https://bugs.launchpad.net/starlingx/+bug/1857716 Conformance was executed using sonobuoy tool, instructions here: https://github.com/cncf/k8s-conformance/blob/master/instructions.md This is the summary: Ran 276 of 4897 Specs in 9316.930 seconds FAIL! -- 275 Passed | 1 Failed | 0 Pending | 4621 Skipped --- FAIL: TestE2E (9316.96s) FAIL It looks like there’s an issue with the self-signed certificate that might have caused the failures and skips. E1226 13:31:14.402186 1 authentication.go:63] Unable to authenticate the request due to an error: [x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "front-proxy-ca"), x509: certificate signed by unknown authority] Some additional details can be found on launchpad, including the full tar.gz file created by sonobuoy. Hopefully somebody can take a look at it. Regards, Cristopher -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Sat Dec 28 09:07:46 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Sat, 28 Dec 2019 03:07:46 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <145d7b$cf4agn@fmsmga005.fm.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191228T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From cristopher.j.lemus.contreras at intel.com Sun Dec 29 08:59:21 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Sun, 29 Dec 2019 02:59:21 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <029d15$6b5l7t@orsmga008.jf.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191229T023000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Ovidiu.Poncea at windriver.com Mon Dec 30 08:42:56 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Mon, 30 Dec 2019 08:42:56 +0000 Subject: [Starlingx-discuss] question about ceph or storage In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85628F402@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628DE96@CDSMSX102.ccr.corp.intel.com> <4C60D9C5C8176C47874FFF36647AA19EA4747683@ALA-MBD.corp.ad.wrs.com> <56829C2A36C2E542B0CCB9854828E4D85628F3EA@CDSMSX102.ccr.corp.intel.com>, <56829C2A36C2E542B0CCB9854828E4D85628F402@CDSMSX102.ccr.corp.intel.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19EA4747C99@ALA-MBD.corp.ad.wrs.com> Hi Chen, It was used in the previous release for connecting a deployment (especially an openstack one) to an external Ceph cluster. I don't know what is the feature testing status for current release. 
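From memory, the command shape was roughly the following - treat it as a sketch only, since the backend name, the extra flag and the file path here are not verified against the current release (please check "system help storage-backend-add" on your load before using it):

# make an already-deployed external Ceph cluster available as a storage backend,
# using a ceph.conf copied over from that external cluster
# (the "ceph-external" backend name, the --confirmed flag and the path below
#  are illustrative only, not confirmed for this release)
system storage-backend-add ceph-external -c /opt/external-ceph/ceph.conf --confirmed

As said, I don't know whether this path is still exercised on the current release, so it would need a retest.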
Ovidiu ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Monday, December 23, 2019 9:18 AM To: Poncea, Ovidiu; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Sorry to disturb! Merry Christmas to all StarlingX member! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Monday, December 23, 2019 3:17 PM To: Poncea, Ovidiu ; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Hi Ovidiu For “system storage-backend-add -c ”, user could deploy another ceph cluster, and add as storage backend, correctly. And this is this command’s intention, correct? Martin, Chen SSP, Software Engineer 021-61164330 From: Poncea, Ovidiu > Sent: Wednesday, December 18, 2019 7:47 PM To: Chen, Haochuan Z >; Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: question about ceph or storage Hi Chen, see inline. ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Tuesday, December 17, 2019 10:46 AM To: Church, Robert; Poncea, Ovidiu Cc: 'starlingx-discuss at lists.starlingx.io' Subject: question about ceph or storage Hi Bob & Ovidiu Some question about ceph or storage. 1, What’s storage tier and storage profile? What’s the [Ovi] Storage tiering is equivalent with this: https://ceph.io/planet/deploying-ceph-with-storage-tiering/ . [Ovi] Profiles are managed by system storprofile-* and system host-apply-profile and are used to copy configuration from one node to another, identical node on initial provisioning. These profiles are only in system inventory, there is no Ceph equivalent. 2, why for duplex it request such puppet class dependency in ceph.pp? Is this request make all drbd config before class ceph? [Ovi] ceph-mon in AIO-DX is DRBD managed and it has a single, floating, monitor. On DX, when you swact, the monitor is stopped on the active controller and started on the standby controller. Drbd::Resource <| |> -> Class['::ceph'] And flag file “.node_ceph_configured”, to inform drbd make init setup before ceph config? 3, To launch ceph-mon, create a logical volume “ceph-mon-lv” and mount to /var/lib/ceph/mon, not directly mkdir “/var/lib/ceph/mon” for ceph-mon [Ovi] Ceph monitors have their own logical volume. They are managed through "system ceph-mon*" commands. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, December 11, 2019 4:46 PM To: Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu >; Qi, Mingyuan > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob Some question, what’s storage tier and storage profile? As you said, we no longer manage pool and pg num, is this also unnecessary and we should remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor See inline… From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. 
And for user override, I think it should be added in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. But a pg num update will trigger a rebalance, which could congest the management network. [RTC] For a more robust solution, consider the following: · We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. · The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. · The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. · I think it’s potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don’t think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilus). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system applications. Any other PG number alarms should be left for the user to manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently no longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs

  cluster:
    id:     6231df84-33be-4aa4-82ea-7408e0f2421c
    health: HEALTH_WARN
            too few PGs per OSD (21 < min 30)

  services:
    mon: 3 daemons, quorum controller-0,controller-1,storage-0
    mgr: controller-0(active), standbys: controller-1
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   645 MiB used, 5.4 TiB / 5.4 TiB avail
    pgs:     64 active+clean

Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number of OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. 
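To make the sizing concrete, this is roughly the math the override/audit would need to apply, shown with the plain ceph CLI against the cluster from the LP. The pool name and replication factor are assumptions on my side (the platform-integ-apps rbd pool is kube-rbd with 2 replicas on my setup), so treat this as a sketch and not as the sysinv implementation:

# how many OSDs are in, and how the pool is sized today
ceph osd ls | wc -l                   # 6 OSDs in the LP example
ceph osd pool get kube-rbd size       # 2 replicas (assumed pool name and size)
ceph osd pool get kube-rbd pg_num     # 64 -> 64 * 2 / 6 = ~21 PGs per OSD, hence the warning

# rule of thumb: total PGs ~= (OSDs * 100) / replicas, rounded to a power of two
# 6 * 100 / 2 = 300 -> 256, which gives 256 * 2 / 6 = ~85 PGs per OSD, well above the min of 30
ceph osd pool set kube-rbd pg_num 256
ceph osd pool set kube-rbd pgp_num 256

Note that on Mimic this only goes one way (pg_num can be raised but not reduced without recreating the pool), so any audit that adjusts PGs would have to be conservative.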
Regards, Bob From: “Chen, Haochuan Z” > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: “'starlingx-discuss at lists.starlingx.io'” >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I found many functions in sysinv-conductor/ceph.py which could manage the ceph cluster, such as create/delete/configure pool, audit pg, etc. But why are these functions not enabled? Or is the plan to ask the user to manage the ceph cluster, such as creating pools and configuring pg num? I am now looking at this issue: the "pg too few" alarm, which is raised when the user has deployed only a few OSDs. For such an issue, should the user decide the correct pg num, or can the user just ignore it? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... 
Ovidiu ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Monday, December 23, 2019 9:18 AM To: Poncea, Ovidiu; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Sorry to disturb! Merry Christmas to all StarlingX member! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Monday, December 23, 2019 3:17 PM To: Poncea, Ovidiu >; Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: question about ceph or storage Hi Ovidiu For "system storage-backend-add -c ", user could deploy another ceph cluster, and add as storage backend, correctly. And this is this command's intention, correct? Martin, Chen SSP, Software Engineer 021-61164330 From: Poncea, Ovidiu > Sent: Wednesday, December 18, 2019 7:47 PM To: Chen, Haochuan Z >; Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: question about ceph or storage Hi Chen, see inline. ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Tuesday, December 17, 2019 10:46 AM To: Church, Robert; Poncea, Ovidiu Cc: 'starlingx-discuss at lists.starlingx.io' Subject: question about ceph or storage Hi Bob & Ovidiu Some question about ceph or storage. 1, What's storage tier and storage profile? What's the [Ovi] Storage tiering is equivalent with this: https://ceph.io/planet/deploying-ceph-with-storage-tiering/ . [Ovi] Profiles are managed by system storprofile-* and system host-apply-profile and are used to copy configuration from one node to another, identical node on initial provisioning. These profiles are only in system inventory, there is no Ceph equivalent. 2, why for duplex it request such puppet class dependency in ceph.pp? Is this request make all drbd config before class ceph? [Ovi] ceph-mon in AIO-DX is DRBD managed and it has a single, floating, monitor. On DX, when you swact, the monitor is stopped on the active controller and started on the standby controller. Drbd::Resource <| |> -> Class['::ceph'] And flag file ".node_ceph_configured", to inform drbd make init setup before ceph config? 3, To launch ceph-mon, create a logical volume "ceph-mon-lv" and mount to /var/lib/ceph/mon, not directly mkdir "/var/lib/ceph/mon" for ceph-mon [Ovi] Ceph monitors have their own logical volume. They are managed through "system ceph-mon*" commands. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, December 11, 2019 4:46 PM To: Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu >; Qi, Mingyuan > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob Some question, what's storage tier and storage profile? As you said, we no longer manage pool and pg num, is this also unnecessary and we should remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor See inline... From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. 
And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: * We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. * The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. * The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. * I think it's potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don't think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don't know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. 
Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ovidiu.Poncea at windriver.com Mon Dec 30 15:40:08 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Mon, 30 Dec 2019 15:40:08 +0000 Subject: [Starlingx-discuss] question about ceph or storage In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D85629833F@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D85628DE96@CDSMSX102.ccr.corp.intel.com> <4C60D9C5C8176C47874FFF36647AA19EA4747683@ALA-MBD.corp.ad.wrs.com> <56829C2A36C2E542B0CCB9854828E4D85628F3EA@CDSMSX102.ccr.corp.intel.com>, <56829C2A36C2E542B0CCB9854828E4D85628F402@CDSMSX102.ccr.corp.intel.com> <4C60D9C5C8176C47874FFF36647AA19EA4747C99@ALA-MBD.corp.ad.wrs.com>, <56829C2A36C2E542B0CCB9854828E4D85629833F@CDSMSX102.ccr.corp.intel.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19EA4747D45@ALA-MBD.corp.ad.wrs.com> I don't know if it's out of date :) it may just need to be retested if it wasn't for current release. ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Monday, December 30, 2019 3:51 PM To: Poncea, Ovidiu; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Thanks Ovidiu! If “-c” is out of date, what about remove it later. Martin, Chen SSP, Software Engineer 021-61164330 From: Poncea, Ovidiu Sent: Monday, December 30, 2019 4:43 PM To: Chen, Haochuan Z ; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Hi Chen, It was used in the previous release for connecting a deployment (especially an openstack one) to an external Ceph cluster. I don't know what is the feature testing status for current release. Ovidiu ________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Monday, December 23, 2019 9:18 AM To: Poncea, Ovidiu; Church, Robert Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: question about ceph or storage Sorry to disturb! Merry Christmas to all StarlingX member! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Monday, December 23, 2019 3:17 PM To: Poncea, Ovidiu >; Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: question about ceph or storage Hi Ovidiu For “system storage-backend-add -c ”, user could deploy another ceph cluster, and add as storage backend, correctly. And this is this command’s intention, correct? Martin, Chen SSP, Software Engineer 021-61164330 From: Poncea, Ovidiu > Sent: Wednesday, December 18, 2019 7:47 PM To: Chen, Haochuan Z >; Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: question about ceph or storage Hi Chen, see inline. 
________________________________ From: Chen, Haochuan Z [haochuan.z.chen at intel.com] Sent: Tuesday, December 17, 2019 10:46 AM To: Church, Robert; Poncea, Ovidiu Cc: 'starlingx-discuss at lists.starlingx.io' Subject: question about ceph or storage Hi Bob & Ovidiu Some question about ceph or storage. 1, What’s storage tier and storage profile? What’s the [Ovi] Storage tiering is equivalent with this: https://ceph.io/planet/deploying-ceph-with-storage-tiering/ . [Ovi] Profiles are managed by system storprofile-* and system host-apply-profile and are used to copy configuration from one node to another, identical node on initial provisioning. These profiles are only in system inventory, there is no Ceph equivalent. 2, why for duplex it request such puppet class dependency in ceph.pp? Is this request make all drbd config before class ceph? [Ovi] ceph-mon in AIO-DX is DRBD managed and it has a single, floating, monitor. On DX, when you swact, the monitor is stopped on the active controller and started on the standby controller. Drbd::Resource <| |> -> Class['::ceph'] And flag file “.node_ceph_configured”, to inform drbd make init setup before ceph config? 3, To launch ceph-mon, create a logical volume “ceph-mon-lv” and mount to /var/lib/ceph/mon, not directly mkdir “/var/lib/ceph/mon” for ceph-mon [Ovi] Ceph monitors have their own logical volume. They are managed through "system ceph-mon*" commands. Thanks! Martin, Chen SSP, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, December 11, 2019 4:46 PM To: Church, Robert > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu >; Qi, Mingyuan > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob Some question, what’s storage tier and storage profile? As you said, we no longer manage pool and pg num, is this also unnecessary and we should remove it? BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Wednesday, December 4, 2019 12:09 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor See inline… From: "Chen, Haochuan Z" > Date: Tuesday, December 3, 2019 at 2:18 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: RE: ceph ops enabling in sysinv-conductor Hi Bob 1, we could update default pg num in sysinv/helm/rbd_provisioner.py. But when platform-integ-apps applied, maybe user has not add osded, so the default pg num is still for user reference. And for user override, I think it should add in ceph-pool-audit, stx-platform-helm/helm-charts/ceph-pools-audit/templates/job-ceph-pools-audit.yaml. but pg num update will make rebalance which cause management network jam. [RTC] For a more robust solution, consider the following: • We can update _met_app_apply_prerequisites () in sysinv/conductor.py for platform-integ-apps to require OSDs to be provisioned prior to applying the application. This will ensure an accurate OSD view when the application is initially applied. • The provisioner system overrides (in sysinv/helm/rbd_provisioner.py) should have the ability to calculate and set an optimal PG number to avoid generating a warning. • The rbd provisioner chart should also support setting a new chunk size (if greater than the existing size) to update the PG num. This will support user PG updates from the helm-overrides API. 
• I think it’s potentially a good idea for ceph-pool-audit to support adjusting the PG numbers as well but this is tricky as I don’t think we can reduce PG_num in Mimic without creating a new pool and copying the contents (pg_autoscaling is added in Nautilis). This audit code would have to be very specific in adjusting the PG num as installing applications that add additional pools will change the PG_num distribution. 2, the above case is only for system application. Any more alarm for PG number, request user should manage. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Church, Robert > Sent: Tuesday, December 3, 2019 10:34 AM To: Chen, Haochuan Z > Cc: 'starlingx-discuss at lists.starlingx.io' >; Poncea, Ovidiu > Subject: Re: ceph ops enabling in sysinv-conductor Hi Martin, In STX 1.0, we ran bare-metal openstack services for which we created and managed specific ceph storage pools for glance, cinder, nova, and swift. In this environment, we managed their creation and parameters (PGs and quotas) as the cluster scaled up. With the move to containerizing the openstack services in STX 2.0, pool creation is now driven mostly by helm charts packaged into an application (radosgw pools are the exception). Since we don’t know what additional application(s) will be deployed and what pools may be created, we are currently longer managing pools as in STX 1.0. These related functions in sysinv/conductor/ceph.py are remaining from STX 1.0 and need to be removed and/or repurposed to meet any new requirements. I took a look at the LP logs and it looks like we only have a single application applied: platform-integ-apps. This currently results in a single pool for 6 OSDs cluster: id: 6231df84-33be-4aa4-82ea-7408e0f2421c health: HEALTH_WARN too few PGs per OSD (21 < min 30) services: mon: 3 daemons, quorum controller-0,controller-1,storage-0 mgr: controller-0(active), standbys: controller-1 osd: 6 osds: 6 up, 6 in data: pools: 1 pools, 64 pgs objects: 0 objects, 0 B usage: 645 MiB used, 5.4 TiB / 5.4 TiB avail pgs: 64 active+clean Since every installation will install platform-integ-apps, I think we should do the following: 1. Update the chunk_size calculation in sysinv/helm/rbd_provisioner.py to be dynamically calculated based on the number OSDs provisioned in the cluster. As this may be the only pool created, it should meet the minimum size characteristics to avoid a ceph warning. 2. Update the rbd-provisioner helm chart to support explicitly setting the pg_num based on the chunk size. Do this to allow setting a user override for the chunk size so that we can re-apply platform-integ-apps and explicitly set new values. Regards, Bob From: "Chen, Haochuan Z" > Date: Monday, December 2, 2019 at 1:15 AM To: Robert Church > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ovidiu Poncea > Subject: ceph ops enabling in sysinv-conductor Hi Bob I find some many function in sysinv-conductor/ceph.py, which could manage ceph cluster, such as create/delete/configure pool, audit pg etc. But why these function is not enabled? Or plan to request user to manage ceph cluster, such as create pool and configure pg num? Now I checked this issue, pg too few, as user maybe deploy few osd, which make alarm. So for such issue, request user to decide correct pg num or user could ignore? https://bugs.launchpad.net/starlingx/+bug/1844164 BR! Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Mon Dec 30 15:43:20 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Mon, 30 Dec 2019 15:43:20 +0000 Subject: [Starlingx-discuss] Sanity Master Test - ISO 20191230 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-December-30 (link) Status: GREEN =========================================== Sanity Test executed in Bare Metal Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed in Virtual Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Thanks & Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenw66 at chinaunicom.cn Tue Dec 31 06:28:22 2019 From: chenw66 at chinaunicom.cn (Wei Chen) Date: Tue, 31 Dec 2019 06:28:22 +0000 Subject: [Starlingx-discuss] Question about distributed openstack In-Reply-To: <1577772988426.88956@chinaunicom.cn> References: <1577772988426.88956@chinaunicom.cn> Message-ID: <1577773690103.64126@chinaunicom.cn> Dear Sir/Madam, I'm an engineer using StarlingX in our product. I set up a central cloud and an edge. Now I have hit an issue with how to share images from the central cloud to the edge. What I thought is: write an application on the edge that gets the image list from the central cloud, then pulls images on demand. It's said that in the near future, e.g. StarlingX 4.0, distributed OpenStack will be leveraged and there is something about image handling. So, could you share any idea/material about it? Thanks & Regards, Wei If you have received this email in error please notify us immediately by e-mail. Please reply to hqs-spmc at chinaunicom.cn and you can unsubscribe from this mail. We will immediately remove your information from our send catalogue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Tue Dec 31 07:30:20 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 31 Dec 2019 07:30:20 +0000 Subject: [Starlingx-discuss] no non-distro meeting on Jan 1st Message-ID: Hi All: Happy New Year, and just a reminder that there is no distro project meeting tomorrow (Jan 1st). Thanks. BR Austin Sun. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Tue Dec 31 08:59:28 2019 From: cristopher.j.lemus.contreras at intel.com (cristopher.j.lemus.contreras at intel.com) Date: Tue, 31 Dec 2019 02:59:28 -0600 Subject: [Starlingx-discuss] StarlingX platform-integ-apps docker images Message-ID: <029d15$6blqgd@orsmga008.jf.intel.com> List of docker images required for "platform-integ-apps": BUILD_ID="20191231T000000Z" rabbitmq:3.7-management k8s.gcr.io/kube-proxy:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/coredns:1.6.2 quay.io/airshipit/armada:8a1638098f88d92bf799ef4934abe569789b885e-ubuntu_bionic quay.io/calico/node:v3.6.2 quay.io/calico/cni:v3.6.2 quay.io/calico/kube-controllers:v3.6.2 rabbitmq:3.7.13-management rabbitmq:3.7.13 gcr.io/kubernetes-helm/tiller:v2.13.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 openstackhelm/mariadb:10.2.18 quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 quay.io/stackanetes/kubernetes-entrypoint:v0.3.1 mariadb:10.2.13 memcached:1.5.5 k8s.gcr.io/pause:3.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 nginx:1.13.3 gcr.io/google_containers/defaultbackend:1.0 From Ghada.Khalil at windriver.com Tue Dec 31 15:19:13 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 31 Dec 2019 15:19:13 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Release meeting Message-ID: <151EE31B9FCCA54397A757BC674650F0C1609238@ALA-MBD.corp.ad.wrs.com> ** We will reconvene the release meetings on Jan 9 Weekly meeting on Thursday 11AM PT / 1900 UTC Zoom Link: https://zoom.us/j/342730236 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1711 bytes Desc: not available URL: