From pvmpublic at gmail.com Thu Oct 1 01:52:02 2020 From: pvmpublic at gmail.com (Pratik M.) Date: Thu, 1 Oct 2020 07:22:02 +0530 Subject: [Starlingx-discuss] SRIOV in starlingx In-Reply-To: References: Message-ID: Hi, a.: I believe you should be able to lock, apply these steps, and unlock. b.: I don't know if StarlingX has an option to specify per node pools. Will defer to experts. But if the nodes are in one L2, they would typically be in one cluster and thus the VF MAC assignment would be arbitrated by cluster wide neutron/CNI, right? If each node is one cluster, maybe the user would need to annotate the pods with static MACs. BR On Tue, 29 Sep, 2020, 23:23 Sriram, wrote: > Hi, > > How do we ensure the uniqueness of VF mac addresses across all the nodes > in the k8s cluster formed on edge nodes. > Please let me know if this problem is addressed by starlingX or if it is > taken care of by some other means. > > Regards, > Sriram > > On Fri, Sep 25, 2020 at 1:20 PM Sriram wrote: > >> Hi Pratik, >> >> Thanks for your reply. >> >> a. Can these steps be done after the installation is complete, now that I >> have already installed. >> b. How do we ensure the uniqueness of VF mac addresses across all the >> nodes in the k8s cluster formed on edge nodes. >> >> Regards, >> Sriram >> >> On Fri, Sep 25, 2020 at 1:02 PM Pratik M. wrote: >> >>> Hi, >>> You would need to do: >>> # system host-label-assign controller-0 sriovdp=enabled >>> # system host-if-modify controller-0 -c pci-sriov -n sriov0 >>> -N >>> # system interface-datanetwork-assign controller-0 >>> >>> # system host-unlock >>> >>> And that should populate the /etc/pcidp/config.json >>> >>> Ref: >>> https://wiki.openstack.org/wiki/StarlingX/Networking >>> Steven Webster's helpful comments in >>> https://bugs.launchpad.net/starlingx/+bug/1891889 >>> >>> Thanks >>> >>> On Thu, Sep 24, 2020 at 3:04 PM Sriram wrote: >>> >>>> Hi, >>>> >>>> I have installed distributed starlingx 4.0 in "All in one Duplex" mode. >>>> There are two nodes in the central cloud and two in the edge cloud. >>>> >>>> I have enabled SRIOV in bios settings of edge cloud nodes and set total >>>> VFs as 16. >>>> >>>> After that, while installing starlingX I followed the steps to enable >>>> SRIOV. >>>> >>>> system host-label-assign controller-0 sriovdp=enabled >>>>> system host-memory-modify controller-0 0 -1G 100 >>>>> system host-memory-modify controller-0 1 -1G 100 >>>> >>>> and ran these steps for controller-1 as well. >>>> >>>> As I understand the first step would label the node "controller-0 and >>>> controller-1" as "sriovdp=enabled" and set the number of 1G huge pages to >>>> 200. >>>> Once the installation was complete, I saw that k8s sriov-device plugin >>>> was not coming up. It complained that the resource list was empty. >>>> >>>> I had to set >>>> "/sys/devices/pci0000:3a/0000:3a:00.0/0000:3b:00.0/sriov_numvfs" to 8 (I >>>> needed 8 virtual interfaces) and update the resource list in >>>> /etc/pcidp/config.json >>>> >>>> { >>>>> "resourceList": [ >>>>> { >>>>> "resourceName": "bcm_sriov_netdevice", >>>>> "selectors": { >>>>> "vendors": ["14e4"], >>>>> "devices": ["16dc"], >>>>> "drivers": ["bnxt_en"], >>>>> "pfNames": ["enp59s0f0#0-7"] >>>>> } >>>>> } >>>>> ] >>>>> } >>>> >>>> >>>> to see that sriov-dp comes up properly. 
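As a minimal sketch tying the static-MAC idea above to the config.json quoted here (assuming Multus and the SR-IOV CNI are deployed, a NetworkAttachmentDefinition named sriov-net0 exists, and the device plugin uses its default intel.com/ resource prefix; the network name and MAC below are made-up placeholders, not taken from this thread), a pod could pin its VF MAC through the network-selection annotation like this:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sriov-static-mac-demo
  annotations:
    # "sriov-net0" and the MAC are placeholders; the SR-IOV CNI honours the
    # "mac" field of the network-selection element.
    k8s.v1.cni.cncf.io/networks: '[{"name": "sriov-net0", "mac": "ca:fe:c0:ff:ee:01"}]'
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      # Assumes the default intel.com/ prefix in front of the resourceName
      # defined in /etc/pcidp/config.json.
      requests:
        intel.com/bcm_sriov_netdevice: "1"
      limits:
        intel.com/bcm_sriov_netdevice: "1"
EOF

With one cluster per node, keeping a small per-node table of such annotations (or templating them per node) is the manual way to guarantee MAC uniqueness across nodes.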
>>>> >>>> Is there any way to pass on the number of VF's( sriov_numvfs )required >>>> per node and resourcelist during the time of installation when we label the >>>> nodes as sriovdp=enabled >>>> >>>> Regards, >>>> Sriram >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sriram.ec at gmail.com Thu Oct 1 07:55:19 2020 From: sriram.ec at gmail.com (Sriram) Date: Thu, 1 Oct 2020 13:25:19 +0530 Subject: [Starlingx-discuss] Distruted StarlingX 4.0 - Worker nodes not booting up Message-ID: Hi, I'm trying to bring up the edge cloud with 3 nodes (1 controller and 2 worker nodes) with starlingX 4.0 - distributed cloud. Central cloud is up and running with All in One Duplex 2 controller configuration. I was able to bring up the controller-0 in edge cloud using iso (virtual cd/dvd mount) and was able to configure the personality for the other nodes as workers. But worker-0 and worker-1 are stuck in pxe boot for more than 2hrs. Any suggestions? In "Standard controller with storage" configuration, is having 2 controllers compulsory ? https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/controller_storage.html This document says it supports 2 controllers and upto 10 worker nodes. Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From sriram.ec at gmail.com Thu Oct 1 08:00:26 2020 From: sriram.ec at gmail.com (Sriram) Date: Thu, 1 Oct 2020 13:30:26 +0530 Subject: [Starlingx-discuss] SRIOV in starlingx In-Reply-To: References: Message-ID: Thanks Pratik. There is no openstack-neutron component. We have only one k8s cluster in the edge cloud in one subnet. Some of the docs that I read suggest assigning mac addresses to vf's using "ip link" command. I'm not sure if that's the only way forward. Regards, Sriram On Thu, Oct 1, 2020 at 7:22 AM Pratik M. wrote: > Hi, > a.: I believe you should be able to lock, apply these steps, and unlock. > > b.: I don't know if StarlingX has an option to specify per node pools. > Will defer to experts. But if the nodes are in one L2, they would typically > be in one cluster and thus the VF MAC assignment would be arbitrated by > cluster wide neutron/CNI, right? If each node is one cluster, maybe the > user would need to annotate the pods with static MACs. > > BR > > > On Tue, 29 Sep, 2020, 23:23 Sriram, wrote: > >> Hi, >> >> How do we ensure the uniqueness of VF mac addresses across all the nodes >> in the k8s cluster formed on edge nodes. >> Please let me know if this problem is addressed by starlingX or if it is >> taken care of by some other means. >> >> Regards, >> Sriram >> >> On Fri, Sep 25, 2020 at 1:20 PM Sriram wrote: >> >>> Hi Pratik, >>> >>> Thanks for your reply. >>> >>> a. Can these steps be done after the installation is complete, now that >>> I have already installed. >>> b. How do we ensure the uniqueness of VF mac addresses across all the >>> nodes in the k8s cluster formed on edge nodes. >>> >>> Regards, >>> Sriram >>> >>> On Fri, Sep 25, 2020 at 1:02 PM Pratik M. 
wrote: >>> >>>> Hi, >>>> You would need to do: >>>> # system host-label-assign controller-0 sriovdp=enabled >>>> # system host-if-modify controller-0 -c pci-sriov -n sriov0 >>>> -N >>>> # system interface-datanetwork-assign controller-0 >>>> >>>> # system host-unlock >>>> >>>> And that should populate the /etc/pcidp/config.json >>>> >>>> Ref: >>>> https://wiki.openstack.org/wiki/StarlingX/Networking >>>> Steven Webster's helpful comments in >>>> https://bugs.launchpad.net/starlingx/+bug/1891889 >>>> >>>> Thanks >>>> >>>> On Thu, Sep 24, 2020 at 3:04 PM Sriram wrote: >>>> >>>>> Hi, >>>>> >>>>> I have installed distributed starlingx 4.0 in "All in one Duplex" >>>>> mode. There are two nodes in the central cloud and two in the edge cloud. >>>>> >>>>> I have enabled SRIOV in bios settings of edge cloud nodes and set >>>>> total VFs as 16. >>>>> >>>>> After that, while installing starlingX I followed the steps to enable >>>>> SRIOV. >>>>> >>>>> system host-label-assign controller-0 sriovdp=enabled >>>>>> system host-memory-modify controller-0 0 -1G 100 >>>>>> system host-memory-modify controller-0 1 -1G 100 >>>>> >>>>> and ran these steps for controller-1 as well. >>>>> >>>>> As I understand the first step would label the node "controller-0 and >>>>> controller-1" as "sriovdp=enabled" and set the number of 1G huge pages to >>>>> 200. >>>>> Once the installation was complete, I saw that k8s sriov-device plugin >>>>> was not coming up. It complained that the resource list was empty. >>>>> >>>>> I had to set >>>>> "/sys/devices/pci0000:3a/0000:3a:00.0/0000:3b:00.0/sriov_numvfs" to 8 (I >>>>> needed 8 virtual interfaces) and update the resource list in >>>>> /etc/pcidp/config.json >>>>> >>>>> { >>>>>> "resourceList": [ >>>>>> { >>>>>> "resourceName": "bcm_sriov_netdevice", >>>>>> "selectors": { >>>>>> "vendors": ["14e4"], >>>>>> "devices": ["16dc"], >>>>>> "drivers": ["bnxt_en"], >>>>>> "pfNames": ["enp59s0f0#0-7"] >>>>>> } >>>>>> } >>>>>> ] >>>>>> } >>>>> >>>>> >>>>> to see that sriov-dp comes up properly. >>>>> >>>>> Is there any way to pass on the number of VF's( sriov_numvfs )required >>>>> per node and resourcelist during the time of installation when we label the >>>>> nodes as sriovdp=enabled >>>>> >>>>> Regards, >>>>> Sriram >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Starlingx-discuss mailing list >>>>> Starlingx-discuss at lists.starlingx.io >>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sriram.ec at gmail.com Fri Oct 2 03:10:48 2020 From: sriram.ec at gmail.com (Sriram) Date: Fri, 2 Oct 2020 08:40:48 +0530 Subject: [Starlingx-discuss] Distruted StarlingX 4.0 - Worker nodes not booting up In-Reply-To: References: Message-ID: Hi, I'm using rel-20.06 software of starlingX-4.0. Initial connection happens to pxe server on controller-0. I do see some packets between worker node and controller-0. 
Below tcpdump shows those packets [root at controller-0 ~(keystone_admin)]# tcpdump -i any port 69 or port 53 or > port 67 -nn > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 > bytes > 10:42:58.263992 ethertype IPv4, IP 0.0.0.0.68 > 255.255.255.255.67: > BOOTP/DHCP, Request from f0:d4:e2:e9:8e:c4, length 548 > 10:42:58.263992 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request > from f0:d4:e2:e9:8e:c4, length 548 > 10:42:58.264290 IP 192.168.22.102.67 > 255.255.255.255.68: BOOTP/DHCP, > Reply, length 305 > 10:42:58.264299 ethertype IPv4, IP 192.168.22.102.67 > 255.255.255.255.68: > BOOTP/DHCP, Reply, length 305 > 10:43:02.301357 ethertype IPv4, IP 0.0.0.0.68 > 255.255.255.255.67: > BOOTP/DHCP, Request from f0:d4:e2:e9:8e:c4, length 548 > 10:43:02.301357 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request > from f0:d4:e2:e9:8e:c4, length 548 > 10:43:02.310717 IP 192.168.22.102.67 > 255.255.255.255.68: BOOTP/DHCP, > Reply, length 305 > 10:43:02.310725 ethertype IPv4, IP 192.168.22.102.67 > 255.255.255.255.68: > BOOTP/DHCP, Reply, length 305 > 10:43:02.311555 ethertype IPv4, IP 169.254.202.138.2070 > > 169.254.202.1.69: 27 RRQ "pxelinux.0" octet tsize 0 > 10:43:02.311555 IP 169.254.202.138.2070 > 169.254.202.1.69: 27 RRQ > "pxelinux.0" octet tsize 0 > 10:43:02.311927 ethertype IPv4, IP 169.254.202.138.2071 > > 169.254.202.1.69: 32 RRQ "pxelinux.0" octet blksize 1456 > 10:43:02.311927 IP 169.254.202.138.2071 > 169.254.202.1.69: 32 RRQ > "pxelinux.0" octet blksize 1456 > 10:43:02.358861 ethertype IPv4, IP 169.254.202.138.49152 > > 169.254.202.1.69: 79 RRQ > "pxelinux.cfg/44454c4c-4800-104c-8034-cac04f473333" octet tsize 0 blksize > 1408 > .................. > > > > *10:43:40.347690 ethertype IPv4, IP 169.254.202.138.49156 > > 169.254.202.1.69: 57 RRQ "rel-20.06/installer-bzImage" octet tsize 0 > blksize 140810:43:40.347690 IP 169.254.202.138.49156 > 169.254.202.1.69: > 57 RRQ "rel-20.06/installer-bzImage" octet tsize 0 blksize > 140810:43:41.035183 ethertype IPv4, IP 169.254.202.138.49157 > > 169.254.202.1.69: 56 RRQ "rel-20.06/installer-initrd" octet tsize 0 > blksize 140810:43:41.035183 IP 169.254.202.138.49157 > 169.254.202.1.69: > 56 RRQ "rel-20.06/installer-initrd" octet tsize 0 blksize 1408* 169.254.202.138 is the worker node ip and 169.254.202.1 is the controller-0 ip. Above 4 are the last packets exchanged and after that no communication is seen. Worker node does not proceed further in installation. Pxe network in the controller node is on vlan-143 and I have enabled the same vlan in bios of the worker node. Please let me know if any info is required. Regards, Sriram On Thu, Oct 1, 2020 at 1:25 PM Sriram wrote: > Hi, > > I'm trying to bring up the edge cloud with 3 nodes (1 controller and 2 > worker nodes) with starlingX 4.0 - distributed cloud. > Central cloud is up and running with All in One Duplex 2 controller > configuration. > > I was able to bring up the controller-0 in edge cloud using iso (virtual > cd/dvd mount) and was able to configure the personality for the other > nodes as workers. But worker-0 and worker-1 are stuck in pxe boot for more > than 2hrs. Any suggestions? > > In "Standard controller with storage" configuration, is having 2 > controllers compulsory ? > > https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/controller_storage.html > This document says it supports 2 controllers and upto 10 worker nodes. 
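One caveat about the trace above: the capture filter only watched ports 53/67/69, and TFTP moves the actual file data on negotiated ephemeral UDP ports, so the installer-bzImage/installer-initrd transfers would not appear in that output even if they are working. A hedged way to dig further (the /pxeboot path is an assumption from memory, adjust to the controller's actual TFTP root):

# Capture all UDP traffic to/from the worker so the TFTP data phase is visible.
tcpdump -ni any 'udp and host 169.254.202.138'

# Check the requested kernel/initrd exist and are readable in the TFTP root
# (path is an assumption; adjust to your controller's pxeboot directory).
ls -l /pxeboot/rel-20.06/installer-bzImage /pxeboot/rel-20.06/installer-initrd

# Pull the initrd with a TFTP client from another host on the PXE VLAN to
# rule out a server-side transfer problem.
tftp 169.254.202.1 -c get rel-20.06/installer-initrd

If the data phase shows the files flowing, the stall is more likely on the worker side (console/BMC output during the initrd load) than on the PXE server itself.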
> > Regards, > Sriram > -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sat Oct 3 01:31:10 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 2 Oct 2020 21:31:10 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_publish - Build # 1689 - Failure! Message-ID: <202388272.171.1601688671508.JavaMail.javamailuser@localhost> Project: STX_publish Build #: 1689 Status: Failure Timestamp: 20201003T013108Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20201003T013000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-compiler/20201003T013000Z OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20201003T013000Z/logs TIMESTAMP: 20201003T013000Z PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/compiler/20201003T013000Z/inputs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/compiler/20201003T013000Z/logs MASTER_JOB_NAME: STX_build_layer_compiler_master_master LAYER: compiler PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/compiler/20201003T013000Z/outputs MY_REPO_ROOT: /localdisk/designer/jenkins/master-compiler PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/compiler From build.starlingx at gmail.com Sat Oct 3 01:31:13 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 2 Oct 2020 21:31:13 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_compiler_master_master - Build # 300 - Failure! Message-ID: <1852619633.174.1601688673870.JavaMail.javamailuser@localhost> Project: STX_build_layer_compiler_master_master Build #: 300 Status: Failure Timestamp: 20201003T013000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20201003T013000Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From bnovickovs at weecodelab.com Sun Oct 4 18:51:44 2020 From: bnovickovs at weecodelab.com (bnovickovs at weecodelab.com) Date: Sun, 04 Oct 2020 19:51:44 +0100 Subject: [Starlingx-discuss] Openstack related question - can I get response on this ticket please Message-ID: <5ed6d50149705aa41a4275cf858a35a3@weecodelab.com> Hi, Can I get response on that ticket please http://lists.starlingx.io/pipermail/starlingx-discuss/2020-September/009679.html From Sriram.Dharwadkar at commscope.com Mon Oct 5 08:00:31 2020 From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram) Date: Mon, 5 Oct 2020 08:00:31 +0000 Subject: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded Message-ID: Hi, I have installed distributed starlingx 4.0 in "All in one Duplex" mode. There are two nodes in the central cloud and two in the edge cloud. Central cloud is up and running. For the edge cloud configuration, in the bootstrap override file, I have configured the private registry. From the central cloud, I was able to add the edge cloud. Images required for starlingX installation are downloaded from private registry and installation goes through w/o any issues. 
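For reference, the private-registry piece of the bootstrap overrides mentioned above is normally expressed as a docker_registries section in the values file handed to dcmanager subcloud add. This is a rough sketch only; the registry URL, credentials, file name and exact key layout are placeholders and should be checked against the R4 install guide rather than taken from here:

# Placeholder values; one mirror registry fronting all upstream registries.
cat > subcloud-bootstrap-values.yml <<'EOF'
docker_registries:
  defaults:
    url: registry.example.local:9001
    username: <registry-user>
    password: <registry-password>
EOF

# The same file also carries the usual subcloud bootstrap values (system mode,
# management/OAM subnets, etc.) and is passed in with something like:
dcmanager subcloud add --bootstrap-address <subcloud-oam-ip> --bootstrap-values subcloud-bootstrap-values.yml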
[sysadmin at controller-0 ~(keystone_admin)]$ dcmanager subcloud list +----+------+------------+--------------+---------------+---------+ | id | name | management | availability | deploy status | sync | +----+------+------------+--------------+---------------+---------+ | 47 | edge | unmanaged | offline | complete | unknown | +----+------+------------+--------------+---------------+---------+ [sysadmin at controller-0 ~(keystone_admin)]$ Then I followed the steps mentioned in the document https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/aio_duplex_install_kubernetes.html#configure-controller-0 And finally did unlock of controller-0. System went for reboot and it came up successfully. After I see availability as "degraded" [sysadmin at controller-0 log(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | degraded | +----+--------------+-------------+----------------+-------------+--------------+ Tail -f /var/log/sysinv.log shows - prerequisites not met.. ysinv 2020-10-05 07:57:12.271 96246 INFO ceph_client [-] Result: {u'waiting': [], u'has_failed': False, u'state': u'success', u'is_waiting': False, u'running': [], u'failed': [], u'finished': [{u'outb': u'{"fsid":"50634828-68b2-43c4-aaa0-ebf53f6e675a","health":{"checks":{},"status":"HEALTH_OK","overall_status":"HEALTH_WARN"},"election_epoch":7,"quorum":[0],"quorum_names":["controller"],"monmap":{"epoch":1,"fsid":"50634828-68b2-43c4-aaa0-ebf53f6e675a","modified":"2020-10-05 06:53:11.461060","created":"2020-10-05 06:53:11.461060","features":{"persistent":["kraken","luminous","mimic","osdmap-prune"],"optional":[]},"mons":[{"rank":0,"name":"controller","addr":"192.168.22.101:6789/0","public_addr":"192.168.22.101:6789/0"}]},"osdmap":{"osdmap":{"epoch":10,"num_osds":1,"num_up_osds":1,"num_in_osds":1,"full":false,"nearfull":false,"num_remapped_pgs":0}},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":112181248,"bytes_avail":1197865828352,"bytes_total":1197978009600},"fsmap":{"epoch":1,"by_rank":[]},"mgrmap":{"epoch":48,"active_gid":24132,"active_name":"controller-0","active_addr":"192.168.22.102:6804/93283","available":true,"standbys":[],"modules":["restful"],"available_modules":[{"name":"balancer","can_run":true,"error_string":""},{"name":"dashboard","can_run":false,"error_string":"Frontend assets not found: incomplete build?"},{"name":"hello","can_run":true,"error_string":""},{"name":"iostat","can_run":true,"error_string":""},{"name":"localpool","can_run":true,"error_string":""},{"name":"prometheus","can_run":true,"error_string":""},{"name":"restful","can_run":true,"error_string":""},{"name":"selftest","can_run":true,"error_string":""},{"name":"smart","can_run":true,"error_string":""},{"name":"status","can_run":true,"error_string":""},{"name":"telegraf","can_run":true,"error_string":""},{"name":"telemetry","can_run":true,"error_string":""},{"name":"zabbix","can_run":true,"error_string":""}],"services":{"restful":"https://controller-0:7999/"}},"servicemap":{"epoch":1,"modified":"0.000000","services":{}}}\n', u'outs': u'', u'command': u'status format=json'}], u'is_finished': True, u'id': u'140404196232080'} sysinv 2020-10-05 07:57:12.284 96246 INFO sysinv.conductor.manager [-] Platform managed application 
platform-integ-apps: Prerequisites not met. sysinv 2020-10-05 07:57:12.286 96246 INFO sysinv.conductor.manager [-] Platform managed application oidc-auth-apps: Prerequisites not met. sysinv 2020-10-05 07:57:12.291 96246 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None sysinv 2020-10-05 07:57:12.293 96246 INFO sysinv.ap I'm not sure why availability is shown as degraded. Any help would be appreciated. Let me know if any logs are required. Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From susendra.selvaraj at intel.com Tue Oct 6 08:00:58 2020 From: susendra.selvaraj at intel.com (Selvaraj, Susendra) Date: Tue, 6 Oct 2020 08:00:58 +0000 Subject: [Starlingx-discuss] Debranding reviews please In-Reply-To: <6876c3f5-fbb3-8d72-5007-894d1556995c@windriver.com> References: <6876c3f5-fbb3-8d72-5007-894d1556995c@windriver.com> Message-ID: Hi Scott, we could go ahead to merge patches for - Preparing the tool chain for the switch. Who could give +2 for below patches - https://review.opendev.org/#/c/750467 https://review.opendev.org/#/c/750042 Regards, Susendra. -----Original Message----- From: Scott Little Sent: Friday, September 25, 2020 2:05 AM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please I should have that for you tomorrow Scott On 2020-09-24 3:50 p.m., Jones, Bruce E wrote: > Scott, thank you for driving this! > > Are there any updates needed to the documentation (starlingx/docs project) as a result of this change? > > brucej > > -----Original Message----- > From: Scott Little > Sent: Thursday, September 24, 2020 12:22 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Debranding reviews please > > I know the debranding topic still says 'wip' but the code is ready for review. > > > Reviews in intended delivery order ... > > > Preparing the tool chain for the switch ... > > https://review.opendev.org/#/c/750041 > > https://review.opendev.org/#/c/750467 > > https://review.opendev.org/#/c/750042 > > https://review.opendev.org/#/c/749974 > > Then as a set  ... rename cgcs-tis-repo to local-repo ... > > https://review.opendev.org/#/c/687401 > > https://review.opendev.org/#/c/749997 > > https://review.opendev.org/#/c/754129 > > And the next set   ... 
rename cgcs-centos-repo to centos-repo > > https://review.opendev.org/#/c/687403 > > https://review.opendev.org/#/c/749998 > > https://review.opendev.org/#/c/750043 > > Finally > > https://review.opendev.org/#/c/754130 > > > The full set for review is here: > > https://review.opendev.org/#/q/topic:debrand_wip+(status:open+OR+status:merged) > > > Thanks Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From claire at openstack.org Tue Oct 6 12:36:23 2020 From: claire at openstack.org (claire at openstack.org) Date: Tue, 06 Oct 2020 12:36:23 +0000 Subject: [Starlingx-discuss] Updated invitation: Weekly StarlingX mtg 7:00am Pacific Timezone @ Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 (CST) (starlingx-discuss@lists.starlingx.io) Message-ID: <000000000000c1ea7905b0ffd8fa@google.com> This event has been changed. Title: Weekly StarlingX mtg 7:00am Pacific Timezone Agenda & Notes: https://wiki.openstack.org/wiki/StarlingX#Meetings When: Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 Central Time - Chicago (changed) Where: https://zoom.us/j/342730236 Calendar: starlingx-discuss at lists.starlingx.io Who: * claire at openstack.org - organizer * Ildiko Vancsa * starlingx-discuss at lists.starlingx.io Event details: https://www.google.com/calendar/event?action=VIEW&eid=XzZvb2syYzloNjkxMzBiOWg4Z3A0Y2I5azg4bzM0YjlwNmgxNDJiYTI2b3MzMmc5ZzZoMWs2ZHBuNmsgc3Rhcmxpbmd4LWRpc2N1c3NAbGlzdHMuc3Rhcmxpbmd4Lmlv&tok=MjAjY2xhaXJlQG9wZW5zdGFjay5vcmcyNzEzYTdmZDQ3NjI0OWViZWEyNjA2ZTUzMjA3ZjZhODliOGZkMzAy&ctz=America%2FChicago&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account starlingx-discuss at lists.starlingx.io because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2395 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: invite.ics Type: application/ics Size: 2460 bytes Desc: not available URL: From ildiko at openstack.org Tue Oct 6 13:34:06 2020 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 6 Oct 2020 15:34:06 +0200 Subject: [Starlingx-discuss] Updated invitation: Weekly StarlingX mtg 7:00am Pacific Timezone @ Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 (CST) (starlingx-discuss@lists.starlingx.io) In-Reply-To: <000000000000c1ea7905b0ffd8fa@google.com> References: <000000000000c1ea7905b0ffd8fa@google.com> Message-ID: <1AFAFE4A-76C5-4EF0-82D0-19DB9475B229@openstack.org> Hi, Please note that Claire’s calendar invite was an ld one with old Zoom information that she has just removed. The TSC and Community calls are running __unchanged on Wednesdays at 7am Pacific / 1400 UTC__. For up to date meeting and dial in information please see the Meetings wiki page here: https://wiki.openstack.org/wiki/Starlingx/Meetings Please let me know if you have any questions. Thanks, Ildikó > On Oct 6, 2020, at 14:36, claire at openstack.org wrote: > > This event has been changed. > Weekly StarlingX mtg 7:00am Pacific Timezone > When > Changed: Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 Central Time - Chicago > Where > https://zoom.us/j/342730236 (map) > Calendar > starlingx-discuss at lists.starlingx.io > Who > • > claire at openstack.org - organizer > • > Ildiko Vancsa > • > starlingx-discuss at lists.starlingx.io > more details » > Agenda & Notes: https://wiki.openstack.org/wiki/StarlingX#Meetings > Going (starlingx-discuss at lists.starlingx.io)? All events in this series: Yes - Maybe - No more options » > Invitation from Google Calendar > > You are receiving this courtesy email at the account starlingx-discuss at lists.starlingx.io because you are an attendee of this event. > > To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. > > Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn More. > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maryx.camp at intel.com Tue Oct 6 20:08:52 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Tue, 6 Oct 2020 20:08:52 +0000 Subject: [Starlingx-discuss] Debranding reviews please In-Reply-To: References: <6876c3f5-fbb3-8d72-5007-894d1556995c@windriver.com> Message-ID: Scott, there is an older review in StarlingX/docs from Saul that may align with some of the debranding changes (removing pike). Can you take a look at: https://review.opendev.org/#/c/693761/ If I can help with anything on the docs side (abandon that old review and kick off a new one?), please let me know. thanks, Mary Camp Kelly Services Technical Writer | maryx.camp at intel.com -----Original Message----- From: Selvaraj, Susendra Sent: Tuesday, October 6, 2020 4:01 AM To: Scott Little ; Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please Hi Scott, we could go ahead to merge patches for - Preparing the tool chain for the switch. 
Who could give +2 for below patches - https://review.opendev.org/#/c/750467 https://review.opendev.org/#/c/750042 Regards, Susendra. -----Original Message----- From: Scott Little Sent: Friday, September 25, 2020 2:05 AM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please I should have that for you tomorrow Scott On 2020-09-24 3:50 p.m., Jones, Bruce E wrote: > Scott, thank you for driving this! > > Are there any updates needed to the documentation (starlingx/docs project) as a result of this change? > > brucej > > -----Original Message----- > From: Scott Little > Sent: Thursday, September 24, 2020 12:22 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Debranding reviews please > > I know the debranding topic still says 'wip' but the code is ready for review. > > > Reviews in intended delivery order ... > > > Preparing the tool chain for the switch ... > > https://review.opendev.org/#/c/750041 > > https://review.opendev.org/#/c/750467 > > https://review.opendev.org/#/c/750042 > > https://review.opendev.org/#/c/749974 > > Then as a set  ... rename cgcs-tis-repo to local-repo ... > > https://review.opendev.org/#/c/687401 > > https://review.opendev.org/#/c/749997 > > https://review.opendev.org/#/c/754129 > > And the next set   ... rename cgcs-centos-repo to centos-repo > > https://review.opendev.org/#/c/687403 > > https://review.opendev.org/#/c/749998 > > https://review.opendev.org/#/c/750043 > > Finally > > https://review.opendev.org/#/c/754130 > > > The full set for review is here: > > https://review.opendev.org/#/q/topic:debrand_wip+(status:open+OR+status:merged) > > > Thanks Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko at openstack.org Tue Oct 6 20:45:25 2020 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 6 Oct 2020 22:45:25 +0200 Subject: [Starlingx-discuss] StarlingX TSC, PL, TL election - Nomination period started Message-ID: <6582479F-20E7-4938-882A-5253CCCE0B9E@openstack.org> Hi StarlingX Community, Nominations for the 5 Technical Steering Committee positions and all PL/TL positions are now open and will remain open until __October 13, 2020 20:45 UTC__. All nominations must be submitted as a text file to the starlingx/election repository as explained on the election website[1]. Please note that the name of the file should match the email address in your Gerrit configuration. Candidates for the Technical Steering Committee Positions: Any contributing community member who is an individual member of the Foundation can propose their candidacy for an available, directly-elected TSC seat. Candidates for the Project Lead Positions: Any contributing community member who is an individual member of the Foundation can propose their candidacy for the project PL seat. 
Candidates for the Technical Lead Positions: Any contributing community member who is an individual member of the Foundation and is a core reviewer to the given project can propose their candidacy for the project TL seat.

The election will be held from October 20, 2020 20:45 UTC through to October 27, 2020 20:45 UTC.

The electorate of the TSC election are the community members that are also contributors for one of the official teams[2] or served in a leadership role (TSC, PL, TL) over the 12-month timeframe October 6, 2019 to October 6, 2020, as well as the contributors who are acknowledged by the TSC.

The electorate of the PL/TL election are the community members that are contributors for the given official team[2] over the 12-month timeframe October 6, 2019 to October 6, 2020, as well as the contributors who are acknowledged by the TSC.

Please see the website[3] for additional details about this election.

Please find below the timeline:

TSC nomination starts @ October 6, 2020 20:45 UTC
TSC nomination ends @ October 13, 2020 20:45 UTC
PL/TL nomination starts @ October 6, 2020 20:45 UTC
PL/TL nomination ends @ October 13, 2020 20:45 UTC
TSC campaigning starts @ October 13, 2020 20:45 UTC
TSC campaigning ends @ October 20, 2020 20:45 UTC
TSC election starts @ October 20, 2020 20:45 UTC
TSC election ends @ October 27, 2020 20:45 UTC
PL/TL election starts @ October 20, 2020 20:45 UTC
PL/TL election ends @ October 27, 2020 20:45 UTC

If you have any questions please be sure to either ask them on the mailing list or to the elections officials[4].

Thank you,

[1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy
[2] https://docs.starlingx.io/governance/reference/tsc/projects/index.html
[3] https://docs.starlingx.io/election/
[4] https://docs.starlingx.io/election/#election-officials

From Sriram.Dharwadkar at commscope.com Wed Oct 7 04:44:25 2020
From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram)
Date: Wed, 7 Oct 2020 04:44:25 +0000
Subject: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded
In-Reply-To:
References: , ,
Message-ID:

Hi Eric,

I was able to figure out the issue. Thanks for the pointers.

While bringing up data network, I had used the below command for the both the workers as I wanted sriov interfaces.

* system host-if-modify $NODE -n sriov0 -c pci-sriov -N 8 --vf-driver=netdevice $DATA0IFUUID

I m not sure if this is the reason the why worker nodes were in disabled and offline state. When I changed the class from pci-sriov to “data”, worker nodes are enabled and operational status is available.

* system host-if-modify -m 1500 -n data0 -c data worker-0 2ded1706-1cd3-43d3-80d2-c7e0ce2b79c

Do you see any problem in first command.

Regards,
Sriram

From: MacDonald, Eric
Sent: Tuesday, October 6, 2020 12:36 AM
To: Dharwadkar, Sriram
Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded

Sriram,

For the worker node configuration failures, look or Error Warn logs in /var/log/puppet/* logs.

Those logs should help you understand what config failed.

Eric.
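To make that concrete, a hedged sketch of both checks (the puppet log layout is from memory, and the interface name, VF count, data network name and $DATA0IFUUID are placeholders carried over from the commands above, not a verified recipe):

# Look for what actually failed during the puppet apply on the worker.
grep -riE "error|warning" /var/log/puppet/ | tail -n 50

# Typical lock/provision/unlock cycle when changing an interface to pci-sriov;
# the data network assignment and sriovdp label are still needed for the
# SR-IOV CNI/device plugin to come up on that node.
system host-lock worker-0
system host-label-assign worker-0 sriovdp=enabled
system host-if-modify worker-0 -n sriov0 -c pci-sriov -N 8 --vf-driver=netdevice $DATA0IFUUID
system interface-datanetwork-assign worker-0 sriov0 <datanetwork-name>
system host-unlock worker-0

# Once the node reports unlocked/enabled/available, the services=disabled:NoExecute
# taint should clear on its own; verify with:
system host-list
kubectl describe node worker-0 | grep -i Taints

If the puppet log points at the pci-sriov interface itself, comparing the failed configuration against the wiki steps quoted earlier in this thread (label, host-if-modify, interface-datanetwork-assign) is usually enough to spot the missing piece.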
________________________________ From: Dharwadkar, Sriram > Sent: Monday, October 5, 2020 2:57 PM To: MacDonald, Eric > Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded Hi Eric, Thanks for the explanation. Initially I was planning to bring up 2 controller (All in one stand alone) + 1 worker node. For this kind of configuration, we would need high capacity h/w for all the nodes which may not be available. That’s the reason I deployed 1 standard controller -with storage + 2 worker nodes configuration. Controller node came up w/o any issues, worker nodes also came up after system host-unlock worker-0 and worker-1. But I could see these alarms * worker-1 experienced a service-affecting failure. Auto-recovery | host=worker-1 | critical | 2020-10-05T | in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful * worker-1 experienced a configuration failure. * worker-0 experienced a service-affecting failure. Auto-recovery | host=worker-0 | critical | 2020-10-05T | in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful * worker-0 experienced a configuration failure. I did lock and unlock of nodes. After nodes came up, I still see the same issue. How to check for configuration failures? Also there is taint on both the worker nodes “Taints: services=disabled:NoExecute”, which I could see using kubectl describe node worker-0 and kubectl describe node worker-1. So, I don’t see Sriov CNI and Sriov DP being deployed in worker nodes. Regards, Sriram From: MacDonald, Eric > Sent: Monday, October 5, 2020 9:14 PM To: Dharwadkar, Sriram > Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded Sriram, The AIO controller has a lot more work to do during startup as it contains both control and compute functions. As a result, you may temporarily see CPU resource utilization logs during the latter stages of the provisioning. Now that you went to standard config, you might see another degrade for a short period of time following the unlock of the second controller during initial filesystem sync. Again, degrade is general, but if you see a degrade there should be an alarm that represents the reason for the degrade. Eric. ________________________________ From: Dharwadkar, Sriram > Sent: Monday, October 5, 2020 10:44 AM To: MacDonald, Eric > Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded Thanks Eric. I reinstalled the system with iso again. I didn’t see this error again with different configuration (I selected standard controller, instead of all in one duplex). I will check for alarms in case of any issues further. Regards, Sriram From: MacDonald, Eric > Sent: Monday, October 5, 2020 5:55 PM To: Dharwadkar, Sriram > Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded What does 'fm alarm-list' show ? As a general rule ; if a host is degraded there should be an alarm raised for that degraded condition. Eric MacDonald StarlingX Maintenance ________________________________ From: Dharwadkar, Sriram > Sent: Monday, October 5, 2020 4:00 AM To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded Hi, I have installed distributed starlingx 4.0 in "All in one Duplex" mode. 
There are two nodes in the central cloud and two in the edge cloud. Central cloud is up and running. For the edge cloud configuration, in the bootstrap override file, I have configured the private registry. From the central cloud, I was able to add the edge cloud. Images required for starlingX installation are downloaded from private registry and installation goes through w/o any issues. [sysadmin at controller-0 ~(keystone_admin)]$ dcmanager subcloud list +----+------+------------+--------------+---------------+---------+ | id | name | management | availability | deploy status | sync | +----+------+------------+--------------+---------------+---------+ | 47 | edge | unmanaged | offline | complete | unknown | +----+------+------------+--------------+---------------+---------+ [sysadmin at controller-0 ~(keystone_admin)]$ Then I followed the steps mentioned in the document https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/aio_duplex_install_kubernetes.html#configure-controller-0 And finally did unlock of controller-0. System went for reboot and it came up successfully. After I see availability as “degraded” [sysadmin at controller-0 log(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | degraded | +----+--------------+-------------+----------------+-------------+--------------+ Tail -f /var/log/sysinv.log shows – prerequisites not met.. ysinv 2020-10-05 07:57:12.271 96246 INFO ceph_client [-] Result: {u'waiting': [], u'has_failed': False, u'state': u'success', u'is_waiting': False, u'running': [], u'failed': [], u'finished': [{u'outb': u'{"fsid":"50634828-68b2-43c4-aaa0-ebf53f6e675a","health":{"checks":{},"status":"HEALTH_OK","overall_status":"HEALTH_WARN"},"election_epoch":7,"quorum":[0],"quorum_names":["controller"],"monmap":{"epoch":1,"fsid":"50634828-68b2-43c4-aaa0-ebf53f6e675a","modified":"2020-10-05 06:53:11.461060","created":"2020-10-05 06:53:11.461060","features":{"persistent":["kraken","luminous","mimic","osdmap-prune"],"optional":[]},"mons":[{"rank":0,"name":"controller","addr":"192.168.22.101:6789/0","public_addr":"192.168.22.101:6789/0"}]},"osdmap":{"osdmap":{"epoch":10,"num_osds":1,"num_up_osds":1,"num_in_osds":1,"full":false,"nearfull":false,"num_remapped_pgs":0}},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":112181248,"bytes_avail":1197865828352,"bytes_total":1197978009600},"fsmap":{"epoch":1,"by_rank":[]},"mgrmap":{"epoch":48,"active_gid":24132,"active_name":"controller-0","active_addr":"192.168.22.102:6804/93283","available":true,"standbys":[],"modules":["restful"],"available_modules":[{"name":"balancer","can_run":true,"error_string":""},{"name":"dashboard","can_run":false,"error_string":"Frontend assets not found: incomplete 
build?"},{"name":"hello","can_run":true,"error_string":""},{"name":"iostat","can_run":true,"error_string":""},{"name":"localpool","can_run":true,"error_string":""},{"name":"prometheus","can_run":true,"error_string":""},{"name":"restful","can_run":true,"error_string":""},{"name":"selftest","can_run":true,"error_string":""},{"name":"smart","can_run":true,"error_string":""},{"name":"status","can_run":true,"error_string":""},{"name":"telegraf","can_run":true,"error_string":""},{"name":"telemetry","can_run":true,"error_string":""},{"name":"zabbix","can_run":true,"error_string":""}],"services":{"restful":"https://controller-0:7999/"}},"servicemap":{"epoch":1,"modified":"0.000000","services":{}}}\n', u'outs': u'', u'command': u'status format=json'}], u'is_finished': True, u'id': u'140404196232080'} sysinv 2020-10-05 07:57:12.284 96246 INFO sysinv.conductor.manager [-] Platform managed application platform-integ-apps: Prerequisites not met. sysinv 2020-10-05 07:57:12.286 96246 INFO sysinv.conductor.manager [-] Platform managed application oidc-auth-apps: Prerequisites not met. sysinv 2020-10-05 07:57:12.291 96246 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None sysinv 2020-10-05 07:57:12.293 96246 INFO sysinv.ap I’m not sure why availability is shown as degraded. Any help would be appreciated. Let me know if any logs are required. Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Oct 7 12:04:01 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 7 Oct 2020 12:04:01 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (Oct 7, 2020) Message-ID: Hi all, reminder of the TSC/Community call coming up later today. One of the topics today will be the election for TSC and PL/TL members, for which we're now in the nomination period. Please feel free to add items to the agenda [0] for the community call & don't forget to use the (still fairly) new Zoom dial in [3]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20201007T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Barton.Wensley at windriver.com Wed Oct 7 12:15:56 2020 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 7 Oct 2020 12:15:56 +0000 Subject: [Starlingx-discuss] Updated invitation: Weekly StarlingX mtg 7:00am Pacific Timezone @ Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 (CST) (starlingx-discuss@lists.starlingx.io) In-Reply-To: <1AFAFE4A-76C5-4EF0-82D0-19DB9475B229@openstack.org> References: <000000000000c1ea7905b0ffd8fa@google.com>, <1AFAFE4A-76C5-4EF0-82D0-19DB9475B229@openstack.org> Message-ID: Thanks Ildikó - can you or Claire please send out a corrected calendar invite so we can insert it in our calendars again? Not sure about everyone else, but Claire's invite resulted in the meeting disappearing completely from my calendar. 
Bart ________________________________ From: Ildiko Vancsa Sent: Tuesday, October 6, 2020 9:34 AM To: StarlingX ML Subject: Re: [Starlingx-discuss] Updated invitation: Weekly StarlingX mtg 7:00am Pacific Timezone @ Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 (CST) (starlingx-discuss at lists.starlingx.io) Hi, Please note that Claire’s calendar invite was an ld one with old Zoom information that she has just removed. The TSC and Community calls are running __unchanged on Wednesdays at 7am Pacific / 1400 UTC__. For up to date meeting and dial in information please see the Meetings wiki page here: https://wiki.openstack.org/wiki/Starlingx/Meetings Please let me know if you have any questions. Thanks, Ildikó > On Oct 6, 2020, at 14:36, claire at openstack.org wrote: > > This event has been changed. > Weekly StarlingX mtg 7:00am Pacific Timezone > When > Changed: Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 Central Time - Chicago > Where > https://zoom.us/j/342730236 (map) > Calendar > starlingx-discuss at lists.starlingx.io > Who > • > claire at openstack.org - organizer > • > Ildiko Vancsa > • > starlingx-discuss at lists.starlingx.io > more details » > Agenda & Notes: https://wiki.openstack.org/wiki/StarlingX#Meetings > Going (starlingx-discuss at lists.starlingx.io)? All events in this series: Yes - Maybe - No more options » > Invitation from Google Calendar > > You are receiving this courtesy email at the account starlingx-discuss at lists.starlingx.io because you are an attendee of this event. > > To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. > > Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn More. > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko at openstack.org Wed Oct 7 12:23:15 2020 From: ildiko at openstack.org (Ildiko Vancsa) Date: Wed, 7 Oct 2020 14:23:15 +0200 Subject: [Starlingx-discuss] Updated invitation: Weekly StarlingX mtg 7:00am Pacific Timezone @ Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 (CST) (starlingx-discuss@lists.starlingx.io) In-Reply-To: References: <000000000000c1ea7905b0ffd8fa@google.com> <1AFAFE4A-76C5-4EF0-82D0-19DB9475B229@openstack.org> Message-ID: <3DDBA110-1BA6-453E-A8C1-166409D09328@openstack.org> Hi Bart, I will bring this up on the community call today to see if Bill would want to own the invite as he is running those meetings. Thanks, Ildikó > On Oct 7, 2020, at 14:15, Wensley, Barton wrote: > > Thanks Ildikó - can you or Claire please send out a corrected calendar invite so we can insert it in our calendars again? Not sure about everyone else, but Claire's invite resulted in the meeting disappearing completely from my calendar. 
> > Bart > > From: Ildiko Vancsa > Sent: Tuesday, October 6, 2020 9:34 AM > To: StarlingX ML > Subject: Re: [Starlingx-discuss] Updated invitation: Weekly StarlingX mtg 7:00am Pacific Timezone @ Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 (CST) (starlingx-discuss at lists.starlingx.io) > > Hi, > > Please note that Claire’s calendar invite was an ld one with old Zoom information that she has just removed. > > The TSC and Community calls are running __unchanged on Wednesdays at 7am Pacific / 1400 UTC__. For up to date meeting and dial in information please see the Meetings wiki page here:https://wiki.openstack.org/wiki/Starlingx/Meetings > > Please let me know if you have any questions. > > Thanks, > Ildikó > > > > On Oct 6, 2020, at 14:36, claire at openstack.org wrote: > > > > This event has been changed. > > Weekly StarlingX mtg 7:00am Pacific Timezone > > When > > Changed: Weekly from 9am to 10am on Wednesday from Wed Nov 7, 2018 to Tue Oct 6 Central Time - Chicago > > Where > > https://zoom.us/j/342730236 (map) > > Calendar > > starlingx-discuss at lists.starlingx.io > > Who > > • > > claire at openstack.org - organizer > > • > > Ildiko Vancsa > > • > > starlingx-discuss at lists.starlingx.io > > more details » > > Agenda & Notes: https://wiki.openstack.org/wiki/StarlingX#Meetings > > Going (starlingx-discuss at lists.starlingx.io)? All events in this series: Yes - Maybe - No more options » > > Invitation from Google Calendar > > > > You are receiving this courtesy email at the account starlingx-discuss at lists.starlingx.io because you are an attendee of this event. > > > > To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. > > > > Forwarding this invitation could allow any recipient to send a response to the organizer and be added to the guest list, or invite others regardless of their own invitation status, or to modify your RSVP. Learn More. > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bnovickovs at weecodelab.com Wed Oct 7 13:05:09 2020 From: bnovickovs at weecodelab.com (bnovickovs at weecodelab.com) Date: Wed, 07 Oct 2020 14:05:09 +0100 Subject: [Starlingx-discuss] DRBD9 and Linstor Message-ID: <5fa21417a80517042ca2c7f3917242bd@weecodelab.com> Have there ever been discussions about DRBD9 support and Linstor? https://www.linbit.com/linstor/ According to those articles: https://vitobotta.com/2019/08/06/kubernetes-storage-openebs-rook-longhorn-storageos-robin-portworx/ & https://vitobotta.com/2019/08/07/linstor-storage-with-kubernetes/ Linstor seems to be very fast and reliable when it comes to hyper-converged systems. Thank you From build.starlingx at gmail.com Wed Oct 7 14:19:41 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 7 Oct 2020 10:19:41 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_container_setup_layered - Build # 974 - Failure! 
Message-ID: <691586327.187.1602080382007.JavaMail.javamailuser@localhost> Project: STX_BUILD_container_setup_layered Build #: 974 Status: Failure Timestamp: 20201007T141934Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20201005T230407Z/logs -------------------------------------------------------------------------------- Parameters PROJECT: master-containers MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20201005T230407Z DOCKER_BUILD_ID: jenkins-master-containers-20201005T230407Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20201005T230407Z/logs DOCKER_BUILD_TAG: master-containers-20201005T230407Z-builder-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20201005T230407Z/logs LAYER: containers MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers From bruce.e.jones at intel.com Wed Oct 7 14:21:01 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 7 Oct 2020 14:21:01 +0000 Subject: [Starlingx-discuss] DRBD9 and Linstor In-Reply-To: <5fa21417a80517042ca2c7f3917242bd@weecodelab.com> References: <5fa21417a80517042ca2c7f3917242bd@weecodelab.com> Message-ID: Interesting, thank you for sharing these links! We are in the final stages of changing the storage subsystem in StarlingX from Ceph to Rook/Ceph. We agree with the posts below that Rook has a lot of advantages. We have no plans that I know of at this time to make further changes in this space. brucej -----Original Message----- From: bnovickovs at weecodelab.com Sent: Wednesday, October 7, 2020 6:05 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] DRBD9 and Linstor Have there ever been discussions about DRBD9 support and Linstor? https://www.linbit.com/linstor/ According to those articles: https://vitobotta.com/2019/08/06/kubernetes-storage-openebs-rook-longhorn-storageos-robin-portworx/ & https://vitobotta.com/2019/08/07/linstor-storage-with-kubernetes/ Linstor seems to be very fast and reliable when it comes to hyper-converged systems. Thank you _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Wed Oct 7 14:54:12 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 7 Oct 2020 14:54:12 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (Oct 7, 2020) In-Reply-To: References: Message-ID: >From today's call... * Standing Topics * Sanity * nothing since last week - some build issues that Scott's looking into * Gerrit Reviews in Need of Attention * nothing this week * Topics for this Week * Elections * nomination period has started: http://lists.starlingx.io/pipermail/starlingx-discuss/2020-October/009745.html * electorate: https://etherpad.opendev.org/p/stx-fall-2020-elections * New meeting invite for the community calls? 
* AR: Bill will add an invite * Issue with stx.3.0.1 (Nic) * Provision fails in stx3.0.1 for Standard configuration * https://bugs.launchpad.net/starlingx/+bug/1897896 * Nic will ask Zhipeng to have a look at this when he's back * Initial SPEC for SDO integration on Starlingx (Poornima) * spec is up for review at https://review.opendev.org/#/c/750636/ * suggestion was made to approach the Containers team about how to deploy the App * ARs from Previous Meetings * nothing this week * Open Requests for Help * Distruted StarlingX 4.0 - Worker nodes not booting up * http://lists.starlingx.io/pipermail/starlingx-discuss/2020-October/009733.html * PXE boot issue - if anyone can help with this, please respond to this item * Openstack: Correct configuration with 1 interface available for data network * http://lists.starlingx.io/pipermail/starlingx-discuss/2020-September/009679.html * still pending on Networking team to weigh in on this * Build Matters (if required) * nothing to discuss here this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, October 7, 2020 8:04 AM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (Oct 7, 2020) Hi all, reminder of the TSC/Community call coming up later today. One of the topics today will be the election for TSC and PL/TL members, for which we're now in the nomination period. Please feel free to add items to the agenda [0] for the community call & don't forget to use the (still fairly) new Zoom dial in [3]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20201007T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Bill.Zvonar at windriver.com Wed Oct 7 14:54:16 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 7 Oct 2020 14:54:16 +0000 Subject: [Starlingx-discuss] StarlingX TSC & Community Call Message-ID: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1925 bytes Desc: not available URL: From Bill.Zvonar at windriver.com Wed Oct 7 15:29:43 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 7 Oct 2020 15:29:43 +0000 Subject: [Starlingx-discuss] StarlingX TSC & Community Call Message-ID: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2167 bytes Desc: not available URL: From maryx.camp at intel.com Wed Oct 7 20:38:36 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 7 Oct 2020 20:38:36 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 2020-10-07 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. 
[1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 2020-10-07 All -- reviews merged since last meeting: 1 (+2 cherry picks from master into R4 branch) All -- bug status -- 10 total - team agrees to defer all low priority LP until the upstreaming effort is completed. 4 LP submitted by https://launchpad.net/~leiyuehui against API documentation, which is generated from source code. They have self-assigned and are submitting reviews. (low priority). Upstreaming WR docs status Dealing with content that is both upstream/downstream - question from last week. How should we handle content that is WR-specific but embedded in shared files? For example, a topic has a feature with subsections and 1 of them is a WR-only item. May use includes or other tags to work around this. Ron will investigate and let us know recommendations. Some of the display issues in the draft upstreaming reviews that we discussed last week are resolved. New patch sets will be coming to fix those issues. New patch sets to existing reviews are preferred, to keep comment continuity. In future, we will probably need a longer meeting to go over the reviews in more detail. TOC mapping discussion (updated doc from Juanita) The text in purple indicates new topics, since Greg created the comparison docs with an earlier release of WRCP. We want to discuss the features/functionality of STX December release as compared to the WRCP GA release also in December. This may impact which doc content is published on the STX website. For example, if GA release supports a feature that is not supported on STX, then we wouldn't want to confuse STX users by describing it. Discuss with Greg and team about the best way to align content with functionality. Display multiple versions of STX docs - no update this week From kennelson11 at gmail.com Wed Oct 7 20:40:54 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 7 Oct 2020 13:40:54 -0700 Subject: [Starlingx-discuss] vPTG Oct 2020 Registration & Schedule Message-ID: Hey everyone, The October 2020 Project Teams Gathering is right around the corner! The official schedule has now been posted on the PTG website [1], the PTGbot has been updated[2], and we have also attached it to this email. Friendly reminder, if you have not already registered, please do so [3]. It is important that we get everyone to register for the event as this is how we will contact you about tooling information/passwords and other event details. Please let us know if you have any questions. Cheers, The Kendalls (diablo_rojo & wendallkaters) [1] PTG Website www.openstack.org/ptg [2] PTGbot: http://ptg.openstack.org/ptg.html [3] PTG Registration: https://october2020ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PTG2-Oct26-30-2020_Schedule (1).pdf Type: application/pdf Size: 706133 bytes Desc: not available URL: From yadav.akshay58 at gmail.com Thu Oct 8 09:25:03 2020 From: yadav.akshay58 at gmail.com (Akki yadav) Date: Thu, 8 Oct 2020 14:55:03 +0530 Subject: [Starlingx-discuss] StarlingX R4.0 Baremetal Duplex standard controller storage ironic deployment mode: Baremetal node not creation issue. Message-ID: Hello Team, I hope all are well. *Setup:* StarlingX R4.0 Baremetal Duplex standard controller storage ironic deployment mode. 
(I followed the official starlingX documentation for deploying the setup.) *Issue*: At the time of "openstack server create" for launching baremetal node, I came across the following multiple observations: - Sometimes when I launch baremetal node on openstack, after one time pxe booting, the baremetal node goes down again and then comes up and goes into second time booting and gets stuck there in "Probing" state ( Seen on node's console) *BUT* according to openstack horizon, it is up and running and according to "openstack baremetal node show", it is in "Active" state. - And sometimes when i launch baremetal node on openstack, after one time pxe booting, the baremetal node goes down again and then comes up, the "spawning" state on openstack horizon goes into ERROR. Error seen in "nova-compute-ironic-0" container is : "ERROR nova.compute.manager [instance: edd447c6-12ac-49ba-b0bc-f419aff4892a] nova.exception.InstanceDeployFailure: Failed to provision instance edd447c6-12ac-49ba-b0bc-f419aff4892a: Timeout reached while waiting for callback for node 75210cc4-ad98-442d-ace1-89ce69467580" - The baremetal node always takes near about 2 hours to be in "available" state from "cleaning" and "clean-wait". Is it correct behaviour ? Please guide me how to resolve this. Regards Akshay -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Thu Oct 8 15:20:56 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 8 Oct 2020 15:20:56 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20201008T013223Z Message-ID: Sanity Test from 2020-October-08 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201008T013223Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201008T013223Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D10733.2D2570D0] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania 
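A note on the ironic provisioning question above (nodes sitting in clean-wait for roughly two hours and deploys failing with "Timeout reached while waiting for callback"): the commands below are usually enough to see where a node is stuck. This is only a hedged sketch -- it assumes the stx-openstack admin credentials are sourced and that <node-uuid> is the ironic node in question; nothing here is StarlingX-specific.

   # What ironic thinks the node is doing, and the last error it recorded
   openstack baremetal node show <node-uuid> -f value -c provision_state -c last_error

   # Nodes still cleaning or waiting for the ramdisk to call back
   openstack baremetal node list --provision-state cleaning
   openstack baremetal node list --provision-state clean-wait

   # Follow the conductor while a deploy or clean is running
   kubectl -n openstack get pods | grep ironic
   kubectl -n openstack logs -f <ironic-conductor-pod-name>

Long clean times are usually the ramdisk erasing every disk on the node; whether that can be relaxed (for example with options along the lines of [conductor] automated_clean or the erase_devices clean-step priorities in ironic.conf) depends on the deployment, so please verify against the ironic documentation rather than treating this as a recommendation.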
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 20512 bytes Desc: image003.png URL: From Frank.Miller at windriver.com Thu Oct 8 20:19:11 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 8 Oct 2020 20:19:11 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Meeting Message-ID: We will have a containerization meeting on Tuesday Oct 13th - I am sending this invite out to give plenty of advance notice as we haven't had a containerization meeting in awhile. Proposed agenda is below. If you have other topics please add to the etherpad agenda. Etherpad: https://etherpad.openstack.org/p/stx-containerization Zoom: https://zoom.us/j/342730236 Passcode: 419405 Agenda for Oct 13th meeting: 1. FM containerization updates [Sharath Kumar] 2. SDO integration feature https://review.opendev.org/#/c/750636/ [Poornima] 3. Other topics ------ Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2350 bytes Desc: not available URL: From Frank.Miller at windriver.com Thu Oct 8 20:24:18 2020 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 8 Oct 2020 20:24:18 +0000 Subject: [Starlingx-discuss] StarlingX Containerization Meeting Message-ID: Update with zoom link that includes password. We will have a containerization meeting on Tuesday Oct 13th - I am sending this invite out to give plenty of advance notice as we haven't had a containerization meeting in awhile. Proposed agenda is below. If you have other topics please add to the etherpad agenda. Etherpad: https://etherpad.openstack.org/p/stx-containerization Zoom: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Agenda for Oct 13th meeting: 1. FM containerization updates [Sharath Kumar] 2. SDO integration feature https://review.opendev.org/#/c/750636/ [Poornima] 3. Other topics ------ Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2498 bytes Desc: not available URL: From build.starlingx at gmail.com Fri Oct 9 05:13:21 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 9 Oct 2020 01:13:21 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1167 - Failure! 
Message-ID: <1672872095.192.1602220402444.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1167 Status: Failure Timestamp: 20201009T050837Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20201009T043000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20201009T043000Z DOCKER_BUILD_ID: jenkins-master-20201009T043000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20201009T043000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20201009T043000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/monolithic From Bill.Zvonar at windriver.com Fri Oct 9 12:04:13 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Fri, 9 Oct 2020 12:04:13 +0000 Subject: [Starlingx-discuss] StarlingX TSC, PL, TL election - Nomination period started In-Reply-To: <6582479F-20E7-4938-882A-5253CCCE0B9E@openstack.org> References: <6582479F-20E7-4938-882A-5253CCCE0B9E@openstack.org> Message-ID: Following up on this, if you expect that you should be part of the electorate for these elections, please check the list at https://etherpad.opendev.org/p/stx-fall-2020-elections. If you're not on that list, please reach out to myself, Ildiko or Yong. -----Original Message----- From: Ildiko Vancsa Sent: Tuesday, October 6, 2020 4:45 PM To: StarlingX ML Subject: [Starlingx-discuss] StarlingX TSC, PL, TL election - Nomination period started Hi StarlingX Community, Nominations for the 5 Technical Steering Committee positions and all PL/TL positions are now open and will remain open until __October 13, 2020 20:45 UTC__. All nominations must be submitted as a text file to the starlingx/election repository as explained on the election website[1]. Please note that the name of the file should match the email address in your Gerrit configuration. Candidates for the Technical Steering Committee Positions: Any contributing community member who is an individual member of the Foundation can propose their candidacy for an available, directly-elected TSC seat. Candidates for the Project Lead Positions: Any contributing community member who is an individual member of the Foundation can propose their candidacy for the project PL seat. Candidates for the Technical Lead Positions: Any contributing community member who is an individual member of the Foundation and is a core reviewer to the given project can propose their candidacy for the project TL seat. The election will be held from October 20, 2020 20:45 UTC through to October 27, 2020 20:45 UTC. The electorate of the TSC election are the community members that are also contributors for one of the official teams[2] or served in a leadership role (TSC, PL, TL) over the 12-month timeframe October 6, 2019 to October 6, 2020, as well as the contributors who are acknowledged by the TSC. The electorate of the PL/TL election are the community members that are contributors for the given official team[2] over the 12-month timeframe October 6, 2019 to October 6, 2020, as well as the contributors who are acknowledged by the TSC. Please see the website[3] for additional details about this election. 
Please find below the timeline: TSC nomination starts @ October 6, 2020 20:45 UTC TSC nomination ends @ October 13, 2020 20:45 UTC PL/TL nomination starts @ October 6, 2020 20:45 UTC PL/TL nomination ends @ October 13, 2020 20:45 UTC TSC campaigning starts @ October 13, 2020 20:45 UTC TSC campaigning ends @ October 20, 2020 20:45 UTC TSC election starts @ October 20, 2020 20:45 UTC TSC election ends @ October 27, 2020 20:45 UTC PL/TL election starts @ October 20, 2020 20:45 UTC PL/TL election ends @ October 27, 2020 20:45 UTC If you have any questions please be sure to either ask them on the mailing list or to the elections officials[4]. Thank you, [1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy [2] https://docs.starlingx.io/governance/reference/tsc/projects/index.html [3] https://docs.starlingx.io/election/ [4] https://docs.starlingx.io/election/#election-officials _______________________________________________ Starlingx-discuss mailing list mailto:Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From alexandru.dimofte at intel.com Sat Oct 10 14:28:47 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sat, 10 Oct 2020 14:28:47 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20201010T013232Z Message-ID: Sanity Test from 2020-October-10 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201010T013232Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201010T013232Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D10733.2D2570D0] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 20512 bytes Desc: image002.png URL: From ildiko at openstack.org Sun Oct 11 10:44:35 2020 From: ildiko at openstack.org (Ildiko Vancsa) Date: Sun, 11 Oct 2020 12:44:35 +0200 Subject: [Starlingx-discuss] StarlingX TSC, PL, TL election - Nomination period started In-Reply-To: <6582479F-20E7-4938-882A-5253CCCE0B9E@openstack.org> References: <6582479F-20E7-4938-882A-5253CCCE0B9E@openstack.org> Message-ID: <50BEC8C1-10F9-4391-8811-29D102C2E064@openstack.org> Hi StarlingX Community, It is a friendly reminder that the nomination period of the 2020 H2 StarlingX TSC/PL/TL election is open until __October 13, 2020 20:45 UTC__. If you would like to run for one of the 5 open TSC seats or project PL or TL position you can find details on how to submit your candidacy on the election web page[1]. In case you have any questions please respond to this thread or reach out to the election officials[2]. Thank you, [1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy [2] https://docs.starlingx.io/election/#election-officials > On Oct 6, 2020, at 22:45, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > Nominations for the 5 Technical Steering Committee positions and all PL/TL positions are now open and will remain open until __October 13, 2020 20:45 UTC__. > > All nominations must be submitted as a text file to the starlingx/election repository as explained on the election website[1]. > > Please note that the name of the file should match the email address in your Gerrit configuration. > > Candidates for the Technical Steering Committee Positions: Any contributing community member who is an individual member of the Foundation can propose their candidacy for an available, directly-elected TSC seat. > > Candidates for the Project Lead Positions: Any contributing community member who is an individual member of the Foundation can propose their candidacy for the project PL seat. > > Candidates for the Technical Lead Positions: Any contributing community member who is an individual member of the Foundation and is a core reviewer to the given project can propose their candidacy for the project TL seat. > > The election will be held from October 20, 2020 20:45 UTC through to October 27, 2020 20:45 UTC. > > The electorate of the TSC election are the community members that are also contributors for one of the official teams[2] or served in a leadership role (TSC, PL, TL) over the 12-month timeframe October 6, 2019 to October 6, 2020, as well as the contributors who are acknowledged by the TSC. > > The electorate of the PL/TL election are the community members that are contributors for the given official team[2] over the 12-month timeframe October 6, 2019 to October 6, 2020, as well as the contributors who are acknowledged by the TSC. > > Please see the website[3] for additional details about this election. 
> Please find below the timeline: > > TSC nomination starts @ October 6, 2020 20:45 UTC > TSC nomination ends @ October 13, 2020 20:45 UTC > PL/TL nomination starts @ October 6, 2020 20:45 UTC > PL/TL nomination ends @ October 13, 2020 20:45 UTC > > TSC campaigning starts @ October 13, 2020 20:45 UTC > TSC campaigning ends @ October 20, 2020 20:45 UTC > > TSC election starts @ October 20, 2020 20:45 UTC > TSC election ends @ October 27, 2020 20:45 UTC > PL/TL election starts @ October 20, 2020 20:45 UTC > PL/TL election ends @ October 27, 2020 20:45 UTC > > If you have any questions please be sure to either ask them on the mailing list or to the elections officials[4]. > > Thank you, > > [1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy > [2] https://docs.starlingx.io/governance/reference/tsc/projects/index.html > [3] https://docs.starlingx.io/election/ > [4] https://docs.starlingx.io/election/#election-officials > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ebiibe82 at gmail.com Mon Oct 12 12:39:58 2020 From: ebiibe82 at gmail.com (Amit Mahajan) Date: Mon, 12 Oct 2020 18:09:58 +0530 Subject: [Starlingx-discuss] Regarding stx-monitor Message-ID: Hi All, We are using StarlingX R4.0. For analyzing a few issues, we are exploring the stx-monitor application. Could you please let us know the following: - Is there any documentation that can guide how to install the stx-monitor application? - Will we be able to get logs for the OpenStack pods that died and were removed? - Does stx-monitor monitors and records metrics such as CPU and Memory usage, and are these available for post mortem analysis? - Does stx-monitor also periodically monitor hosts' (controllers & worker nodes') CPU, Memory etc. and are these metrics available for post mortem analysis? Regards, Amit -------------- next part -------------- An HTML attachment was scrubbed... URL: From sriram.ec at gmail.com Mon Oct 12 12:52:55 2020 From: sriram.ec at gmail.com (Sriram) Date: Mon, 12 Oct 2020 18:22:55 +0530 Subject: [Starlingx-discuss] Distruted StarlingX 4.0 - Worker nodes not booting up In-Reply-To: References: Message-ID: I was able to resolve the issue. Workers nodes are up and running. Documentation below says that we should use port-based VLAN for pxe booting, where as I had configured trunk VLAN for pxe network. With port-based VLAN things are fine. https://docs.starlingx.io/configuration/host_interface_network_config.html The pxeboot network is an optional network required in scenarios where the > mgmt network cannot be used for PXE booting of hosts. For example, use > the pxeboot network when the mgmt network needs to be IPv6 (not currently > supported for PXE booting). In these scenarios, the PXE boot network uses a > dedicated VLAN (port-based), and the mgmt network uses a separate > dedicated VLAN (tagged) on the same port. Thanks, Sriram On Fri, Oct 2, 2020 at 8:40 AM Sriram wrote: > Hi, > > I'm using rel-20.06 software of starlingX-4.0. Initial connection happens > to pxe server on controller-0. I do see some packets between worker node > and controller-0. 
Below tcpdump shows those packets > > [root at controller-0 ~(keystone_admin)]# tcpdump -i any port 69 or port 53 >> or port 67 -nn >> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode >> listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 >> bytes >> 10:42:58.263992 ethertype IPv4, IP 0.0.0.0.68 > 255.255.255.255.67: >> BOOTP/DHCP, Request from f0:d4:e2:e9:8e:c4, length 548 >> 10:42:58.263992 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request >> from f0:d4:e2:e9:8e:c4, length 548 >> 10:42:58.264290 IP 192.168.22.102.67 > 255.255.255.255.68: BOOTP/DHCP, >> Reply, length 305 >> 10:42:58.264299 ethertype IPv4, IP 192.168.22.102.67 > >> 255.255.255.255.68: BOOTP/DHCP, Reply, length 305 >> 10:43:02.301357 ethertype IPv4, IP 0.0.0.0.68 > 255.255.255.255.67: >> BOOTP/DHCP, Request from f0:d4:e2:e9:8e:c4, length 548 >> 10:43:02.301357 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request >> from f0:d4:e2:e9:8e:c4, length 548 >> 10:43:02.310717 IP 192.168.22.102.67 > 255.255.255.255.68: BOOTP/DHCP, >> Reply, length 305 >> 10:43:02.310725 ethertype IPv4, IP 192.168.22.102.67 > >> 255.255.255.255.68: BOOTP/DHCP, Reply, length 305 >> 10:43:02.311555 ethertype IPv4, IP 169.254.202.138.2070 > >> 169.254.202.1.69: 27 RRQ "pxelinux.0" octet tsize 0 >> 10:43:02.311555 IP 169.254.202.138.2070 > 169.254.202.1.69: 27 RRQ >> "pxelinux.0" octet tsize 0 >> 10:43:02.311927 ethertype IPv4, IP 169.254.202.138.2071 > >> 169.254.202.1.69: 32 RRQ "pxelinux.0" octet blksize 1456 >> 10:43:02.311927 IP 169.254.202.138.2071 > 169.254.202.1.69: 32 RRQ >> "pxelinux.0" octet blksize 1456 >> 10:43:02.358861 ethertype IPv4, IP 169.254.202.138.49152 > >> 169.254.202.1.69: 79 RRQ >> "pxelinux.cfg/44454c4c-4800-104c-8034-cac04f473333" octet tsize 0 blksize >> 1408 >> .................. >> >> >> >> *10:43:40.347690 ethertype IPv4, IP 169.254.202.138.49156 > >> 169.254.202.1.69: 57 RRQ "rel-20.06/installer-bzImage" octet tsize 0 >> blksize 140810:43:40.347690 IP 169.254.202.138.49156 > 169.254.202.1.69: >> 57 RRQ "rel-20.06/installer-bzImage" octet tsize 0 blksize >> 140810:43:41.035183 ethertype IPv4, IP 169.254.202.138.49157 > >> 169.254.202.1.69: 56 RRQ "rel-20.06/installer-initrd" octet tsize 0 >> blksize 140810:43:41.035183 IP 169.254.202.138.49157 > 169.254.202.1.69: >> 56 RRQ "rel-20.06/installer-initrd" octet tsize 0 blksize 1408* > > > 169.254.202.138 is the worker node ip and 169.254.202.1 is the > controller-0 ip. Above 4 are the last packets exchanged and after that no > communication is seen. Worker node does not proceed further in > installation. > Pxe network in the controller node is on vlan-143 and I have enabled the > same vlan in bios of the worker node. > > Please let me know if any info is required. > > Regards, > Sriram > > > > On Thu, Oct 1, 2020 at 1:25 PM Sriram wrote: > >> Hi, >> >> I'm trying to bring up the edge cloud with 3 nodes (1 controller and 2 >> worker nodes) with starlingX 4.0 - distributed cloud. >> Central cloud is up and running with All in One Duplex 2 controller >> configuration. >> >> I was able to bring up the controller-0 in edge cloud using iso (virtual >> cd/dvd mount) and was able to configure the personality for the other >> nodes as workers. But worker-0 and worker-1 are stuck in pxe boot for more >> than 2hrs. Any suggestions? >> >> In "Standard controller with storage" configuration, is having 2 >> controllers compulsory ? 
>> >> https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/controller_storage.html >> This document says it supports 2 controllers and upto 10 worker nodes. >> >> Regards, >> Sriram >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko at openstack.org Mon Oct 12 22:25:37 2020 From: ildiko at openstack.org (Ildiko Vancsa) Date: Tue, 13 Oct 2020 00:25:37 +0200 Subject: [Starlingx-discuss] StarlingX TSC, PL, TL election - Nomination period started In-Reply-To: <50BEC8C1-10F9-4391-8811-29D102C2E064@openstack.org> References: <6582479F-20E7-4938-882A-5253CCCE0B9E@openstack.org> <50BEC8C1-10F9-4391-8811-29D102C2E064@openstack.org> Message-ID: Hi StarlingX Community, It is a friendly reminder that the nomination period of the 2020 H2 StarlingX TSC/PL/TL election is open for less than a day and will end on __October 13, 2020 20:45 UTC__. There are a couple of projects without PL and TL candidates. If you are considering to run for these positions for any of the projects below don’t hesitate and __submit your nomination before the deadline__: * Build * Containers * Distro (non-OpenStack) * Docs * MultiOS * Networking * Test If you would like to run for one of the 5 open TSC seats or project PL or TL position you can find details on how to submit your candidacy on the election web page[1]. In case you have any questions please respond to this thread or reach out to the election officials[2]. Thank you, [1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy [2] https://docs.starlingx.io/election/#election-officials > On Oct 11, 2020, at 12:44, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > It is a friendly reminder that the nomination period of the 2020 H2 StarlingX TSC/PL/TL election is open until __October 13, 2020 20:45 UTC__. > > If you would like to run for one of the 5 open TSC seats or project PL or TL position you can find details on how to submit your candidacy on the election web page[1]. > > In case you have any questions please respond to this thread or reach out to the election officials[2]. > > Thank you, > > [1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy > [2] https://docs.starlingx.io/election/#election-officials > > >> On Oct 6, 2020, at 22:45, Ildiko Vancsa wrote: >> >> Hi StarlingX Community, >> >> Nominations for the 5 Technical Steering Committee positions and all PL/TL positions are now open and will remain open until __October 13, 2020 20:45 UTC__. >> >> All nominations must be submitted as a text file to the starlingx/election repository as explained on the election website[1]. >> >> Please note that the name of the file should match the email address in your Gerrit configuration. >> >> Candidates for the Technical Steering Committee Positions: Any contributing community member who is an individual member of the Foundation can propose their candidacy for an available, directly-elected TSC seat. >> >> Candidates for the Project Lead Positions: Any contributing community member who is an individual member of the Foundation can propose their candidacy for the project PL seat. >> >> Candidates for the Technical Lead Positions: Any contributing community member who is an individual member of the Foundation and is a core reviewer to the given project can propose their candidacy for the project TL seat. >> >> The election will be held from October 20, 2020 20:45 UTC through to October 27, 2020 20:45 UTC. 
>> >> The electorate of the TSC election are the community members that are also contributors for one of the official teams[2] or served in a leadership role (TSC, PL, TL) over the 12-month timeframe October 6, 2019 to October 6, 2020, as well as the contributors who are acknowledged by the TSC. >> >> The electorate of the PL/TL election are the community members that are contributors for the given official team[2] over the 12-month timeframe October 6, 2019 to October 6, 2020, as well as the contributors who are acknowledged by the TSC. >> >> Please see the website[3] for additional details about this election. >> Please find below the timeline: >> >> TSC nomination starts @ October 6, 2020 20:45 UTC >> TSC nomination ends @ October 13, 2020 20:45 UTC >> PL/TL nomination starts @ October 6, 2020 20:45 UTC >> PL/TL nomination ends @ October 13, 2020 20:45 UTC >> >> TSC campaigning starts @ October 13, 2020 20:45 UTC >> TSC campaigning ends @ October 20, 2020 20:45 UTC >> >> TSC election starts @ October 20, 2020 20:45 UTC >> TSC election ends @ October 27, 2020 20:45 UTC >> PL/TL election starts @ October 20, 2020 20:45 UTC >> PL/TL election ends @ October 27, 2020 20:45 UTC >> >> If you have any questions please be sure to either ask them on the mailing list or to the elections officials[4]. >> >> Thank you, >> >> [1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy >> [2] https://docs.starlingx.io/governance/reference/tsc/projects/index.html >> [3] https://docs.starlingx.io/election/ >> [4] https://docs.starlingx.io/election/#election-officials >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From haochuan.z.chen at intel.com Tue Oct 13 02:14:30 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 13 Oct 2020 02:14:30 +0000 Subject: [Starlingx-discuss] rook patch review for metal project Message-ID: Patch for rook https://review.opendev.org/#/c/737228/ meeting link https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 1599 bytes Desc: not available URL: From build.starlingx at gmail.com Tue Oct 13 05:07:37 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Oct 2020 01:07:37 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1179 - Failure! 
Message-ID: <894131687.205.1602565659158.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 1179 Status: Failure Timestamp: 20201013T050259Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20201013T043000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20201013T043000Z DOCKER_BUILD_ID: jenkins-master-20201013T043000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20201013T043000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20201013T043000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/monolithic From austin.sun at intel.com Tue Oct 13 07:54:38 2020 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 13 Oct 2020 07:54:38 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 10/14/2020 Message-ID: Hi All: Agenda 10/14 meeting: 1) rook ceph: a) discuss rook ceph dedicated storage setup migration solution https://review.opendev.org/#/c/757087/ b) rook ceph patches status https://review.opendev.org/#/q/status:open++branch:master+topic:%22ceph+containerization%22 https://review.opendev.org/#/c/737228/ --- Wait Ovidiu feedback, no feedback for 2 weeks. https://review.opendev.org/#/c/734065/ --- ansible playbook, no review feedback for month 2) open If any more topic , please add into https://etherpad.opendev.org/p/stx-distro-other Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Tue Oct 13 08:19:58 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 13 Oct 2020 08:19:58 +0000 Subject: [Starlingx-discuss] request patch review for rook Message-ID: Hi Ovidiu Don, Penney propose you to review this patch. Please review my patch for rook. https://review.opendev.org/#/c/737228/ I book a meeting to check your concern for my patch, but you didn't accept and not attend the meeting. Please review patch, any concern you can propose time for meeting. Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Tue Oct 13 08:27:33 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 13 Oct 2020 08:27:33 +0000 Subject: [Starlingx-discuss] Rook patch review for project rook-ceph Message-ID: Rook patch review for project rook-ceph https://review.opendev.org/#/c/716792/ meeting link https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2572 bytes Desc: not available URL: From susendra.selvaraj at intel.com Tue Oct 13 12:27:58 2020 From: susendra.selvaraj at intel.com (Selvaraj, Susendra) Date: Tue, 13 Oct 2020 12:27:58 +0000 Subject: [Starlingx-discuss] Debranding reviews please In-Reply-To: References: <6876c3f5-fbb3-8d72-5007-894d1556995c@windriver.com> Message-ID: Hi Yong, Austin, could you please review below patch for +2 https://review.opendev.org/#/c/750042 regards, Susendra. 
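For anyone who wants to exercise the changes above (the rook-ceph series or the debranding patches) locally before adding a +1/+2, the standard git-review flow applies. A minimal sketch -- the change number is just the numeric part of the review URL, and the clone URL must of course match the repo the change actually belongs to:

   git clone https://opendev.org/starlingx/<repo>.git && cd <repo>
   git review -s                 # one-time gerrit remote setup
   git review -d 750042          # fetch change 750042 into a local topic branch
   git log -p -1                 # inspect the patch before building/testing with it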
-----Original Message----- From: Selvaraj, Susendra Sent: Tuesday, October 6, 2020 1:31 PM To: Scott Little ; Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please Hi Scott, we could go ahead to merge patches for - Preparing the tool chain for the switch. Who could give +2 for below patches - https://review.opendev.org/#/c/750467 https://review.opendev.org/#/c/750042 Regards, Susendra. -----Original Message----- From: Scott Little Sent: Friday, September 25, 2020 2:05 AM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please I should have that for you tomorrow Scott On 2020-09-24 3:50 p.m., Jones, Bruce E wrote: > Scott, thank you for driving this! > > Are there any updates needed to the documentation (starlingx/docs project) as a result of this change? > > brucej > > -----Original Message----- > From: Scott Little > Sent: Thursday, September 24, 2020 12:22 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Debranding reviews please > > I know the debranding topic still says 'wip' but the code is ready for review. > > > Reviews in intended delivery order ... > > > Preparing the tool chain for the switch ... > > https://review.opendev.org/#/c/750041 > > https://review.opendev.org/#/c/750467 > > https://review.opendev.org/#/c/750042 > > https://review.opendev.org/#/c/749974 > > Then as a set  ... rename cgcs-tis-repo to local-repo ... > > https://review.opendev.org/#/c/687401 > > https://review.opendev.org/#/c/749997 > > https://review.opendev.org/#/c/754129 > > And the next set   ... rename cgcs-centos-repo to centos-repo > > https://review.opendev.org/#/c/687403 > > https://review.opendev.org/#/c/749998 > > https://review.opendev.org/#/c/750043 > > Finally > > https://review.opendev.org/#/c/754130 > > > The full set for review is here: > > https://review.opendev.org/#/q/topic:debrand_wip+(status:open+OR+status:merged) > > > Thanks Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From austin.sun at intel.com Tue Oct 13 12:49:27 2020 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 13 Oct 2020 12:49:27 +0000 Subject: [Starlingx-discuss] Debranding reviews please In-Reply-To: References: <6876c3f5-fbb3-8d72-5007-894d1556995c@windriver.com> Message-ID: Done. -----Original Message----- From: Selvaraj, Susendra Sent: Tuesday, October 13, 2020 8:28 PM To: Selvaraj, Susendra ; Scott Little ; Jones, Bruce E ; Hu, Yong ; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Debranding reviews please Hi Yong, Austin, could you please review below patch for +2 https://review.opendev.org/#/c/750042 regards, Susendra. 
-----Original Message----- From: Selvaraj, Susendra Sent: Tuesday, October 6, 2020 1:31 PM To: Scott Little ; Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please Hi Scott, we could go ahead to merge patches for - Preparing the tool chain for the switch. Who could give +2 for below patches - https://review.opendev.org/#/c/750467 https://review.opendev.org/#/c/750042 Regards, Susendra. -----Original Message----- From: Scott Little Sent: Friday, September 25, 2020 2:05 AM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please I should have that for you tomorrow Scott On 2020-09-24 3:50 p.m., Jones, Bruce E wrote: > Scott, thank you for driving this! > > Are there any updates needed to the documentation (starlingx/docs project) as a result of this change? > > brucej > > -----Original Message----- > From: Scott Little > Sent: Thursday, September 24, 2020 12:22 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Debranding reviews please > > I know the debranding topic still says 'wip' but the code is ready for review. > > > Reviews in intended delivery order ... > > > Preparing the tool chain for the switch ... > > https://review.opendev.org/#/c/750041 > > https://review.opendev.org/#/c/750467 > > https://review.opendev.org/#/c/750042 > > https://review.opendev.org/#/c/749974 > > Then as a set  ... rename cgcs-tis-repo to local-repo ... > > https://review.opendev.org/#/c/687401 > > https://review.opendev.org/#/c/749997 > > https://review.opendev.org/#/c/754129 > > And the next set   ... rename cgcs-centos-repo to centos-repo > > https://review.opendev.org/#/c/687403 > > https://review.opendev.org/#/c/749998 > > https://review.opendev.org/#/c/750043 > > Finally > > https://review.opendev.org/#/c/754130 > > > The full set for review is here: > > https://review.opendev.org/#/q/topic:debrand_wip+(status:open+OR+status:merged) > > > Thanks Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From susendra.selvaraj at intel.com Tue Oct 13 14:23:14 2020 From: susendra.selvaraj at intel.com (Selvaraj, Susendra) Date: Tue, 13 Oct 2020 14:23:14 +0000 Subject: [Starlingx-discuss] Debranding reviews please In-Reply-To: References: <6876c3f5-fbb3-8d72-5007-894d1556995c@windriver.com> Message-ID: Thanks Austin. Please help with the below as well. + Davlet, Don. - https://review.opendev.org/#/c/750041 -- needs Davlet or Austim to give 2nd +2 WFL +1 - https://review.opendev.org/#/c/749974 -- needs Don or Austin to give 2nd +2 and WFL +1 Regards, Susendra. -----Original Message----- From: Sun, Austin Sent: Tuesday, October 13, 2020 6:19 PM To: Selvaraj, Susendra ; Scott Little ; Jones, Bruce E ; Hu, Yong Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Debranding reviews please Done. 
-----Original Message----- From: Selvaraj, Susendra Sent: Tuesday, October 13, 2020 8:28 PM To: Selvaraj, Susendra ; Scott Little ; Jones, Bruce E ; Hu, Yong ; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Debranding reviews please Hi Yong, Austin, could you please review below patch for +2 https://review.opendev.org/#/c/750042 regards, Susendra. -----Original Message----- From: Selvaraj, Susendra Sent: Tuesday, October 6, 2020 1:31 PM To: Scott Little ; Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please Hi Scott, we could go ahead to merge patches for - Preparing the tool chain for the switch. Who could give +2 for below patches - https://review.opendev.org/#/c/750467 https://review.opendev.org/#/c/750042 Regards, Susendra. -----Original Message----- From: Scott Little Sent: Friday, September 25, 2020 2:05 AM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Debranding reviews please I should have that for you tomorrow Scott On 2020-09-24 3:50 p.m., Jones, Bruce E wrote: > Scott, thank you for driving this! > > Are there any updates needed to the documentation (starlingx/docs project) as a result of this change? > > brucej > > -----Original Message----- > From: Scott Little > Sent: Thursday, September 24, 2020 12:22 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Debranding reviews please > > I know the debranding topic still says 'wip' but the code is ready for review. > > > Reviews in intended delivery order ... > > > Preparing the tool chain for the switch ... > > https://review.opendev.org/#/c/750041 > > https://review.opendev.org/#/c/750467 > > https://review.opendev.org/#/c/750042 > > https://review.opendev.org/#/c/749974 > > Then as a set  ... rename cgcs-tis-repo to local-repo ... > > https://review.opendev.org/#/c/687401 > > https://review.opendev.org/#/c/749997 > > https://review.opendev.org/#/c/754129 > > And the next set   ... 
rename cgcs-centos-repo to centos-repo > > https://review.opendev.org/#/c/687403 > > https://review.opendev.org/#/c/749998 > > https://review.opendev.org/#/c/750043 > > Finally > > https://review.opendev.org/#/c/754130 > > > The full set for review is here: > > https://review.opendev.org/#/q/topic:debrand_wip+(status:open+OR+status:merged) > > > Thanks Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Saul.Wold at windriver.com Tue Oct 13 14:53:39 2020 From: Saul.Wold at windriver.com (Saul Wold) Date: Tue, 13 Oct 2020 07:53:39 -0700 Subject: [Starlingx-discuss] MultiOS Meeting Notes 10/13/2020 Message-ID: <317aaf3a-1a4d-af38-4ce6-efaa0d4fe218@windriver.com> This file is https://etherpad.openstack.org/p/stx-multios https://meet.jit.si/starlingx-multios This is the meeting notes and agenda file for the StarlingX Multi-OS project MultiOS Team Meeting 10/13/2020 Status Update Patches are getting submitted and reviewed on Gerrit Simplex now reaches enabled and available unlocked All services are active. Sometimes unlocking fails https://bugs.launchpad.net/starlingx/+bug/1899648 Updated build script - needs to be submitted to gerrit Need to revisit image types and installer Needs a story -- Sau! From Sriram.Dharwadkar at commscope.com Tue Oct 13 18:25:46 2020 From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram) Date: Tue, 13 Oct 2020 18:25:46 +0000 Subject: [Starlingx-discuss] Enabling CPU Manager Message-ID: Hi, I have installed StarlingX edge cloud with version 4.0 with one controller + 2 worker nodes. In this setup, how to enable CPU Manager, so that dedicated cpu's can be assigned to containers. I modified the file /var/lib/kubelet/config.yaml, to have below contents and rebooted the worker node featureGates: CPUManager: true cpuManagerPolicy: static cpuManagerReconcilePeriod: 5s systemReserved: cpu: 2000m kubeReserved: cpu: 2000m It has not taken effect. Any suggestions to enable CPU Manage ? Looks like documentation for this is not ready. Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Oct 13 18:34:49 2020 From: scott.little at windriver.com (Scott Little) Date: Tue, 13 Oct 2020 14:34:49 -0400 Subject: [Starlingx-discuss] Debranding reviews please In-Reply-To: References: Message-ID: <481edef5-2f93-0073-d421-c7e194ce141a@windriver.com> Documentation update is ... https://review.opendev.org/#/c/757932/ Scott On 2020-09-24 3:50 p.m., Jones, Bruce E wrote: > Scott, thank you for driving this! > > Are there any updates needed to the documentation (starlingx/docs project) as a result of this change? > > brucej > > -----Original Message----- > From: Scott Little > Sent: Thursday, September 24, 2020 12:22 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Debranding reviews please > > I know the debranding topic still says 'wip' but the code is ready for review. > > > Reviews in intended delivery order ... 
> > > Preparing the tool chain for the switch ... > > https://review.opendev.org/#/c/750041 > > https://review.opendev.org/#/c/750467 > > https://review.opendev.org/#/c/750042 > > https://review.opendev.org/#/c/749974 > > Then as a set  ... rename cgcs-tis-repo to local-repo ... > > https://review.opendev.org/#/c/687401 > > https://review.opendev.org/#/c/749997 > > https://review.opendev.org/#/c/754129 > > And the next set   ... rename cgcs-centos-repo to centos-repo > > https://review.opendev.org/#/c/687403 > > https://review.opendev.org/#/c/749998 > > https://review.opendev.org/#/c/750043 > > Finally > > https://review.opendev.org/#/c/754130 > > > The full set for review is here: > > https://review.opendev.org/#/q/topic:debrand_wip+(status:open+OR+status:merged) > > > Thanks Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Oct 13 19:20:45 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Oct 2020 15:20:45 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_installer_layered - Build # 257 - Failure! Message-ID: <1844722442.210.1602616847062.JavaMail.javamailuser@localhost> Project: STX_build_installer_layered Build #: 257 Status: Failure Timestamp: 20201013T192043Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201013T183157Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20201013T183157Z DOCKER_BUILD_ID: jenkins-master-flock-20201013T183157Z-builder OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201013T183157Z/logs MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20201013T183157Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock From build.starlingx at gmail.com Tue Oct 13 19:20:49 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Oct 2020 15:20:49 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 278 - Failure! Message-ID: <2016353678.213.1602616849809.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 278 Status: Failure Timestamp: 20201013T183157Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201013T183157Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From ildiko.vancsa at gmail.com Tue Oct 13 20:46:08 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 13 Oct 2020 22:46:08 +0200 Subject: [Starlingx-discuss] StarlingX TSC/PL/TL election - Nomination period ended Message-ID: <466E5270-9BEF-4565-8666-6F79419B45DE@gmail.com> Hi StarlingX Community, I would like to inform you that the nomination period[1] for the StarlingX TSC/PL/TL election has ended. Thank you to all candidates who submitted their nominations into this round. The election officials[2] are still finalizing some details and will come back with further updates shortly. 
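Returning to the CPU Manager question a few messages above: on StarlingX the kubelet configuration under /var/lib/kubelet is generated by the platform, so hand edits tend to be overwritten on the next unlock/reboot, which is most likely why the manual change to config.yaml did not stick. The supported path appears to be the kube-cpu-mgr-policy host label (see the reply further down in this thread and https://review.opendev.org/#/c/689609/). A hedged sketch, assuming R4.0 and a worker named worker-0 -- please verify the exact label name and value against that review:

   system host-lock worker-0
   system host-label-assign worker-0 kube-cpu-mgr-policy=static
   system host-unlock worker-0

   # after the node is unlocked and available, confirm the policy took effect
   ssh worker-0 cat /var/lib/kubelet/cpu_manager_state

With the static policy in place, a container only gets exclusive cores if its pod lands in the Guaranteed QoS class with integer CPU requests equal to limits, e.g.:

   kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Pod
   metadata:
     name: pinned-test
   spec:
     containers:
     - name: app
       image: busybox
       command: ["sleep", "3600"]
       resources:
         requests: {cpu: "2", memory: "256Mi"}
         limits: {cpu: "2", memory: "256Mi"}
   EOF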
Thank you, [1] https://docs.starlingx.io/election/ [2] https://docs.starlingx.io/election/#election-officials From austin.sun at intel.com Wed Oct 14 01:22:15 2020 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 14 Oct 2020 01:22:15 +0000 Subject: [Starlingx-discuss] Enabling CPU Manager In-Reply-To: References: Message-ID: Hi Sriram: You can check [1] and [2] . I think current implement is using host label (kube-cpu-mgr-policy) to enable this feature gate [1] https://storyboard.openstack.org/#!/story/2006565 [2] https://review.opendev.org/#/c/689609/ Thanks. BR Austin Sun. From: Dharwadkar, Sriram Sent: Wednesday, October 14, 2020 2:26 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Enabling CPU Manager Hi, I have installed StarlingX edge cloud with version 4.0 with one controller + 2 worker nodes. In this setup, how to enable CPU Manager, so that dedicated cpu's can be assigned to containers. I modified the file /var/lib/kubelet/config.yaml, to have below contents and rebooted the worker node featureGates: CPUManager: true cpuManagerPolicy: static cpuManagerReconcilePeriod: 5s systemReserved: cpu: 2000m kubeReserved: cpu: 2000m It has not taken effect. Any suggestions to enable CPU Manage ? Looks like documentation for this is not ready. Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Oct 14 02:20:44 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Oct 2020 22:20:44 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_installer_layered - Build # 258 - Still Failing! In-Reply-To: <473929632.208.1602616843995.JavaMail.javamailuser@localhost> References: <473929632.208.1602616843995.JavaMail.javamailuser@localhost> Message-ID: <700897919.216.1602642045631.JavaMail.javamailuser@localhost> Project: STX_build_installer_layered Build #: 258 Status: Still Failing Timestamp: 20201014T022042Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201014T013251Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20201014T013251Z DOCKER_BUILD_ID: jenkins-master-flock-20201014T013251Z-builder OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201014T013251Z/logs MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20201014T013251Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock From build.starlingx at gmail.com Wed Oct 14 02:20:47 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Oct 2020 22:20:47 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 279 - Still Failing! 
In-Reply-To: <655853432.211.1602616847783.JavaMail.javamailuser@localhost> References: <655853432.211.1602616847783.JavaMail.javamailuser@localhost> Message-ID: <245930118.219.1602642047975.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 279 Status: Still Failing Timestamp: 20201014T013251Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201014T013251Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From haochuan.z.chen at intel.com Wed Oct 14 04:36:52 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Wed, 14 Oct 2020 04:36:52 +0000 Subject: [Starlingx-discuss] please review this patch - https://review.opendev.org/#/c/737228/ for Rook-Ceph In-Reply-To: References: <7740B5F1-BAD6-498D-8A6A-D14BEC7A628B@intel.com> Message-ID: From you comment, I have one concern. Kickstart will remove boot device and rootfs vg, lv and pv. In currently ks script, it will only check “vda vdb sda sdb dda ddb hda hdb nvme0n1 nvme1n1” in /dev/ But if user add host by such command, “system host-add -n -m -b sdc -r sdd”, by set boot device as sdc and rootfs as sdd. Ks will not remove vg and lv on these disk. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Wednesday, October 14, 2020 12:06 PM To: 'Poncea, Ovidiu' ; Hu, Yong ; Sun, Austin Cc: Penney, Don ; Waines, Greg ; Jones, Bruce E Subject: RE: please review this patch - https://review.opendev.org/#/c/737228/ for Rook-Ceph Thanks Ovidiu! I updated patch. Currently for rook, rook backup and restore only support not wipe osd. For rook, there is a rook-ceph-operator, which manage all the ceph cluster, such as launching mon, mgr, osd deployment, creating cephfs filesystem, pool and manage crushmap. So during restore, after k8s cluster restore, it will restore rook-ceph-operator’s status, which means re-deploy mon, mgr and osd deployment as it used to be, such restore is operator’s behavior, not by starlingx. If osd disk wiped, osd pods will launch fail. If force rook-ceph restore to enable wipe-osd, so there will be such process. 1, ansible restore k8s cluster (launch by starlingx) 2, k8s cluster resotre rook-ceph-operator(by k8s cluster) 3, rook-ceph-operator re-deploy osd pod, and launch fail 4, ansible restore script, read rook-ceph-operator status about osd deployment and cleanup these info and request rook-ceph-operator to exit osd deployment 5, ansible restore script, use read osd deployment info to re-launch osd-prepare job to initialization and launch osd deployment by rook-ceph-operator. I prefer rook-ceph only support not wipe osd case. If user want to wipe osd disk, they can remove rook-ceph application after restore. BR! Martin, Chen IOTG, Software Engineer 021-61164330 From: Poncea, Ovidiu > Sent: Tuesday, October 13, 2020 5:34 PM To: Hu, Yong >; Sun, Austin > Cc: Penney, Don >; Waines, Greg >; Jones, Bruce E >; Chen, Haochuan Z > Subject: RE: please review this patch - https://review.opendev.org/#/c/737228/ for Rook-Ceph Done, sorry for the required rework, we must be careful with this area as issues can be very problematic. From: Hu, Yong > Sent: marți, 13 octombrie 2020 04:19 To: Poncea, Ovidiu >; Sun, Austin > Cc: Penney, Don >; Waines, Greg >; Jones, Bruce E >; Chen, Haochuan Z > Subject: please review this patch - https://review.opendev.org/#/c/737228/ for Rook-Ceph Importance: High Hi Ovidiu, Could you please review this patch? 
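On the kickstart concern above (the wipe logic only checks a fixed list -- vda vdb sda sdb dda ddb hda hdb nvme0n1 nvme1n1 -- so a node added with "-b sdc -r sdd" keeps stale VGs/LVs/PVs on those disks): one way to make the cleanup independent of device naming is to walk the LVM metadata itself instead of a hard-coded device list. The shell below is only an illustrative sketch of that idea, not the actual kickstart code, and would need the usual guards before going anywhere near a %pre script:

   # remove every LV/VG/PV that LVM knows about, wherever it lives
   for pv in $(pvs --noheadings -o pv_name 2>/dev/null); do
       vg=$(pvs --noheadings -o vg_name "$pv" 2>/dev/null | tr -d ' ')
       if [ -n "$vg" ]; then
           lvremove -f "$vg"        # drops all LVs in the VG
           vgremove -f "$vg"
       fi
       pvremove -ff -y "$pv"
   done
   # then clear any remaining signatures on the boot/rootfs devices named in host-add
   wipefs -a /dev/sdc /dev/sdd      # sdc/sdd from the host-add example above; adjust per node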
Don wanted to get your feedback back to Sept 14 in order to move forward for merge. https://review.opendev.org/#/c/737228/ If you have any concerns, @Sun, Austin will book a half-hour meeting today (in your morning). Regards, Yong -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Oct 14 07:15:59 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 14 Oct 2020 03:15:59 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_installer - Build # 824 - Failure! Message-ID: <991980009.222.1602659760340.JavaMail.javamailuser@localhost> Project: STX_build_installer Build #: 824 Status: Failure Timestamp: 20201014T071001Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20201014T043000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20201014T043000Z DOCKER_BUILD_ID: jenkins-master-20201014T043000Z-builder OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20201014T043000Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20201014T043000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Wed Oct 14 07:16:02 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 14 Oct 2020 03:16:02 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 704 - Failure! Message-ID: <1535507748.225.1602659762794.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 704 Status: Failure Timestamp: 20201014T043000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20201014T043000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From Bill.Zvonar at windriver.com Wed Oct 14 12:31:55 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 14 Oct 2020 12:31:55 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (Oct 14, 2020) Message-ID: Hi all, reminder of the TSC/Community call coming up later today. Please feel free to add items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20201007T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Matt.Peters at windriver.com Wed Oct 14 13:17:38 2020 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 14 Oct 2020 13:17:38 +0000 Subject: [Starlingx-discuss] Regarding stx-monitor In-Reply-To: References: Message-ID: <6787C1DA-73E2-4AF6-B569-8557D8AD0E54@windriver.com> Hi Amit, The stx-monitor Armada application is not being actively maintained since there wasn’t much interest from the community in continuing to support it. The individual container services can still be deployed using Helm on StarlingX if you require. There are also several other projects within the CNCF landscape for monitoring that can also be considered. 
https://landscape.cncf.io/category=observability-and-analysis&format=card-mode&grouping=category From: Amit Mahajan Date: Monday, October 12, 2020 at 8:41 AM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Regarding stx-monitor Hi All, We are using StarlingX R4.0. For analyzing a few issues, we are exploring the stx-monitor application. Could you please let us know the following: * Is there any documentation that can guide how to install the stx-monitor application? * Will we be able to get logs for the OpenStack pods that died and were removed? * Does stx-monitor monitors and records metrics such as CPU and Memory usage, and are these available for post mortem analysis? * Does stx-monitor also periodically monitor hosts' (controllers & worker nodes') CPU, Memory etc. and are these metrics available for post mortem analysis? Regards, Amit -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Oct 14 14:43:42 2020 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 14 Oct 2020 14:43:42 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (Oct 14, 2020) In-Reply-To: References: Message-ID: >From today... * Standing Topics * Sanity * 2 green sanities since last week, but now build failures * per Scott, the build will be fixed shortly & restarted * Gerrit Reviews in Need of Attention * per Austin, Rook reviews are continuing * per Cole, need a non-WR +1 for https://review.opendev.org/#/c/756717/ * Bruce just gave it a +1 * other non-WR TSC folks to check & +1 * Topics for this Week * Nic looking for helping creating a devel branch for the test repo * Nic will send Scott the details, Scott will help * ARs from Previous Meetings * nothing this week * Open Requests for Help * StarlingX R4.0 Baremetal Duplex standard controller storage ironic deployment mode: Baremetal node not creation issue. * http://lists.starlingx.io/pipermail/starlingx-discuss/2020-October/009758.html * AR: Austin / Mingyuan will check & respond * Build Matters (if required) * nothing this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, October 14, 2020 8:32 AM To: StarlingX ML Subject: Community (& TSC) Call (Oct 14, 2020) Hi all, reminder of the TSC/Community call coming up later today. Please feel free to add items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20201007T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From scott.little at windriver.com Wed Oct 14 14:50:43 2020 From: scott.little at windriver.com (Scott Little) Date: Wed, 14 Oct 2020 10:50:43 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 704 - Failure! In-Reply-To: <1535507748.225.1602659762794.JavaMail.javamailuser@localhost> References: <1535507748.225.1602659762794.JavaMail.javamailuser@localhost> Message-ID: A stray 'i' (as in vi's insert mode) got into the debranding code delivered yesterday. It only breaks the rebuild of the installer, so unlikely to affect typical designer workflows. A fix has been delivered. 
Scott On 2020-10-14 3:16 a.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_master > Build #: 704 > Status: Failure > Timestamp: 20201014T043000Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20201014T043000Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Wed Oct 14 14:54:24 2020 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 14 Oct 2020 14:54:24 +0000 Subject: [Starlingx-discuss] MoM: Weekly StarlingX non-OpenStack distro meeting, 10/14/2020 Message-ID: Hi All: Thanks join. MoM 10/14 meeting: 1) rook ceph: a) discuss rook ceph dedicated storage setup migration solution https://review.opendev.org/#/c/757087/ https://etherpad.opendev.org/p/stx-rook-ceph-migration. b) rook ceph patches status https://review.opendev.org/#/q/status:open++branch:master+topic:%22ceph+containerization%22 https://review.opendev.org/#/c/737228/ --- martin has addressed the review comments , need review again https://review.opendev.org/#/c/734065/ --- martin has addressed the review comments , need review again Thanks. BR Austin Sun. From: Sun, Austin Sent: Wednesday, August 19, 2020 10:06 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] MoM: Weekly StarlingX non-OpenStack distro meeting, 8/19/2020 Hi All: MoM for 8/19 meeting: - Stx.4.0 Release announced - Ceph containerization: status: http://lists.starlingx.io/pipermail/starlingx-discuss/2020-August/009452.html Patches Call for review: https://review.opendev.org/#/q/status:open+branch:master+topic:%22ceph+containerization%22 - Centos8: call for help status: http://lists.starlingx.io/pipermail/starlingx-discuss/2020-July/009227.html rpm comparation master /centos8 branch https://drive.google.com/drive/folders/1_TQwFsQSiVdsN5xWv4D3jajkmxWiJ3KV?usp=sharing open patches https://review.opendev.org/#/q/topic:centos8+branch:f/centos8+status:open - Open: Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Oct 14 14:54:59 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 14 Oct 2020 16:54:59 +0200 Subject: [Starlingx-discuss] 2020 H2 TSC/PL/TL election results Message-ID: <185C4862-20FC-4EEB-A572-C506D1BA1BDD@gmail.com> Hi, The 2020 H2 StarlingX elections are now over. Congratulations to all the new and reelected TSC members, PLs and TLs. You can find all the candidates for the 2020 H2 StarlingX elections here: https://opendev.org/starlingx/election/src/branch/master/candidates/2020_H2 The following patch is updating the projects.yaml and TSC member files with the results where we have changes: https://review.opendev.org/#/c/758146/ We have 1 new PL and also 4 new TLs and are still looking for to fill 2 PL and 1 TL roles that the TSC can do by appointing new leaders in the absence of eligible candidates. Thank you to all the candidates for running in the elections. If you have any questions please reply to this thread or reach out to the election officials[1]. 
Thanks and Best Regards, [1] https://docs.starlingx.io/election/#election-officials From bruce.e.jones at intel.com Wed Oct 14 16:14:48 2020 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 14 Oct 2020 16:14:48 +0000 Subject: [Starlingx-discuss] TSC call minutes Oct 14 2020 Message-ID: 10/14/2020 * Elections update o No need for the voting period, not enough candidates o Welcome new TSC members Dariush, Greg and Mingyuan!! o We have the following positions unfilled: ? Build TL - Scott will take this role ? Distro PL - volunteer needed, Ildiko to check with the team ? Docs TL - Mary will take this role ? MultiOS PL - Saul will take this role ? Release TL - Ghada will take this role ? Security TL - Ghada will take this role ? Test PL - Ildiko will reach out to the team to fill these roles. ? Test TL o From the above we usually have one person filling both the PL and TL roles for Docs, Release and Security - should we continue with that approach? Yes :) * PTG planning and agenda o Expecting feature leads for the proposed 5.0/4.1 features to sign up to lead discussions on their features o Other PTG topics? Cross-project discussions? o https://etherpad.opendev.org/p/stx-ptg-planning-october-2020 o Bruce has updated the PTG etherpad with candidate features and presenters from the release planning sheet Everyone - Please review the PTG etherpad - feel free to add topics to be discussed and change the names of the presenters I put there as needed. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Oct 14 17:54:41 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 14 Oct 2020 13:54:41 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_populate - Build # 919 - Failure! Message-ID: <1917927460.230.1602698081991.JavaMail.javamailuser@localhost> Project: STX_build_populate Build #: 919 Status: Failure Timestamp: 20201014T175433Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20201014T174137Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-compiler/20201014T174137Z DOCKER_BUILD_ID: jenkins-master-compiler-20201014T174137Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-compiler/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20201014T174137Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/compiler/20201014T174137Z/logs MASTER_JOB_NAME: STX_build_layer_compiler_master_master LAYER: compiler MY_REPO_ROOT: /localdisk/designer/jenkins/master-compiler BUILD_ISO: false From build.starlingx at gmail.com Wed Oct 14 17:54:43 2020 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 14 Oct 2020 13:54:43 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_layer_compiler_master_master - Build # 313 - Failure! 
Message-ID: <264344504.233.1602698084317.JavaMail.javamailuser@localhost> Project: STX_build_layer_compiler_master_master Build #: 313 Status: Failure Timestamp: 20201014T174137Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20201014T174137Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: true From maryx.camp at intel.com Wed Oct 14 20:33:35 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 14 Oct 2020 20:33:35 +0000 Subject: [Starlingx-discuss] StarlingX Docs Team Call Message-ID: 3:30 pm Eastern - 12:30 pm Pacific - Docs Team Call Call details * https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o Passcode: 419405 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes * The agenda and notes for each call are kept here: https://etherpad.openstack.org/p/stx-documentation * Call recordings: https://wiki.openstack.org/wiki/Starlingx/Meeting_Logs#Docs_Team_Call -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2624 bytes Desc: not available URL: From maryx.camp at intel.com Wed Oct 14 21:15:37 2020 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 14 Oct 2020 21:15:37 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 2020-10-14 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 2020-10-14  . All -- reviews merged since last meeting:  0   . All -- bug status -- 12 total - team agrees to defer all low priority LP until the upstreaming effort is completed.  o 2 more LP submitted by https://launchpad.net/~leiyuehui against API documentation, which is generated from source code. (6 total, low priority).   AR Mary follow up with submitter. . Reviews in progress:    o Several reviews related to Rook are being worked: https://review.opendev.org/#/q/status:open+project:starlingx/docs+branch:master+topic:ceph-cluster-editorial o Scott's review, part of the debranding:  https://review.opendev.org/#/c/757932/  Upstreaming WR docs status . Display issues with code samples and scroll bars. We discovered that missing scroll bars in wide tables and code snippets were browser discrepancies, not Sphinx issues. Can't find an official browser support statement in StarlingX, but did find one for OpenStack Horizon: https://docs.openstack.org/horizon/latest/user/browser_support.html  Ron has Firefox, so does Greg. Don't need to create a whole test plan for multiple browsers, keep an eye on upstream look and feel during conversion. Team agreed to add a statement something like this:  StarlingX documentation is best displayed using Firefox and Chrome browsers. Other browsers may not display certain template formatting. . We are using 'rubrics' now to include subheaders in tasks, such as "Prerequisites", "About this Task", "Procedure" etc. 
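For reference, a minimal sketch of what one of these rubrics looks like in the RST source (the heading text and the steps are just an example):

   .. rubric:: Procedure

   #. Lock the host.
   #. Apply the interface changes.
   #. Unlock the host.
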
This prevents them from being captured in the TOC like 'normal' headers. But they render as plain paragraph text in tox builds. Looking at the source for an example, I see that we have a class:

   Procedure

but it does not seem to be defined in the StarlingX templates. Likely to be defined in a stylesheet, figure out which one is loaded onto the page. << AR Mary and Ron look for this. Possible locations below. o OpenStack doc tool scripts. Overview here:  https://docs.openstack.org/doc-contrib-guide/doc-tools/scripts.html, repo here: https://opendev.org/openstack/openstack-doc-tools o OpenStack Docs Theme. Overview: https://docs.openstack.org/openstackdocstheme/latest/, repo here:  https://opendev.org/openstack/openstackdocstheme/src/branch/master o OpenStack template generator. Overview: https://docs.openstack.org/doc-contrib-guide/doc-tools/template-generator.html, repo here:  https://opendev.org/openstack/openstack-manuals/src/branch/master/www/templates . We have some very complex tables, for example, for alarm and event definitions. The spans in these seem to confuse the StarlingX table templates that control row background color alternation (white, grey, white ...). See the attached screen. Not sure what the solution is here. The information may need to be refactored to avoid this behavior.  o The alternating color rows are mixed up. Can we turn that off? May be other options to resolve this. Ron will dig into it and report back.  . Dealing with content that is both upstream/downstream. How should we handle content that is WR-specific but embedded in shared files? For example, a topic has a feature with subsections and 1 of them is a WR-only item. May use includes or other tags to work around this. Ron showed us a potential implementation of this, using the .. only directive [display only if docs are built using the "partner" flag] and .. include directive [include the appropriate RST file] Directives will be placed at the end of a file. The team agreed this is a good approach. The folks doing the conversion will implement this and we can review it.  . TOC mapping discussion (updated doc from Juanita) - The text in purple indicates new topics, since Greg created the comparison docs with an earlier release of WRCP.  Greg had to drop off our call for another meeting. Greg may be unavailable to do a complete review of the doc set to evaluate all the new topics. Maybe he can focus on the topics in purple (identified as questions). WR has the lead on this and willl let us know if we can help. AR Juanita to bring up at their Friday meeting.  . Related topic to bring up with Greg on Friday:  We want to discuss the features/functionality of STX December release as compared to the WRCP GA release also in December. This may impact which doc content is published on the STX website. For example, if GA release supports a feature that is not supported on STX, then we wouldn't want to confuse STX users by describing it. Discuss with Greg and team about the best way to align content with functionality.  Display multiple versions of STX docs - no update.  From bnovickovs at weecodelab.com Wed Oct 14 21:21:37 2020 From: bnovickovs at weecodelab.com (bnovickovs at weecodelab.com) Date: Wed, 14 Oct 2020 22:21:37 +0100 Subject: [Starlingx-discuss] Cilium over Calico Message-ID: <1a1a81f89dbf32fdbb5ea7f12db7f0e7@weecodelab.com> Hi, I am wondering, if there is possibility of using Cilium CNI instead of Calico? According to this comparison https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-updated-august-2020-6e1b757b9e49 ; Calico is still better in terms of performance/resource consumption. 
However, if we consider production-grade bare metal k8s cluster, Cilium offers: - better visibility/observation - encryption - and it seems like comparing to Calico, Cilium project seems to be more active. Thank you From bnovickovs at weecodelab.com Wed Oct 14 21:34:29 2020 From: bnovickovs at weecodelab.com (bnovickovs at weecodelab.com) Date: Wed, 14 Oct 2020 22:34:29 +0100 Subject: [Starlingx-discuss] Cilium over Calico In-Reply-To: <1a1a81f89dbf32fdbb5ea7f12db7f0e7@weecodelab.com> References: <1a1a81f89dbf32fdbb5ea7f12db7f0e7@weecodelab.com> Message-ID: <81fd4bb87f2835d8537ffadf2e1d5b17@weecodelab.com> On 2020-10-14 22:21, bnovickovs at weecodelab.com wrote: > Hi, > > I am wondering, if there is possibility of using Cilium CNI instead of > Calico? > > According to this comparison > https://itnext.io/benchmark-results-of-kubernetes-network-plugins-cni-over-10gbit-s-network-updated-august-2020-6e1b757b9e49 > ; Calico is still better in terms of performance/resource consumption. > However, if we consider production-grade bare metal k8s cluster, > Cilium offers: > - better visibility/observation > - encryption > - and it seems like comparing to Calico, Cilium project seems to be > more active. > > Thank you Forgot to mention Hubble https://github.com/cilium/hubble From zhipengs.liu at intel.com Thu Oct 15 00:50:19 2020 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 15 Oct 2020 00:50:19 +0000 Subject: [Starlingx-discuss] StarlingX R4.0 Baremetal Duplex standard controller storage ironic deployment mode: Baremetal node not creation issue. In-Reply-To: References: Message-ID: Hi Akshay, Which image are you using to create server? Does this image work on R3.0? You can also retry it with other centos image, make sure it is not the image issue first. Do we have one LP to track this issue? If not, could you help raise one and upload key logs so that Some guys to further check the detail info. Thanks! Zhipeng From: Akki yadav Sent: 2020年10月8日 17:25 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX R4.0 Baremetal Duplex standard controller storage ironic deployment mode: Baremetal node not creation issue. Hello Team, I hope all are well. Setup: StarlingX R4.0 Baremetal Duplex standard controller storage ironic deployment mode. (I followed the official starlingX documentation for deploying the setup.) Issue: At the time of "openstack server create" for launching baremetal node, I came across the following multiple observations: - Sometimes when I launch baremetal node on openstack, after one time pxe booting, the baremetal node goes down again and then comes up and goes into second time booting and gets stuck there in "Probing" state ( Seen on node's console) BUT according to openstack horizon, it is up and running and according to "openstack baremetal node show", it is in "Active" state. - And sometimes when i launch baremetal node on openstack, after one time pxe booting, the baremetal node goes down again and then comes up, the "spawning" state on openstack horizon goes into ERROR. Error seen in "nova-compute-ironic-0" container is : "ERROR nova.compute.manager [instance: edd447c6-12ac-49ba-b0bc-f419aff4892a] nova.exception.InstanceDeployFailure: Failed to provision instance edd447c6-12ac-49ba-b0bc-f419aff4892a: Timeout reached while waiting for callback for node 75210cc4-ad98-442d-ace1-89ce69467580" - The baremetal node always takes near about 2 hours to be in "available" state from "cleaning" and "clean-wait". Is it correct behaviour ? 
Please guide me how to resolve this. Regards Akshay -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sriram.Dharwadkar at commscope.com Thu Oct 15 06:46:45 2020 From: Sriram.Dharwadkar at commscope.com (Dharwadkar, Sriram) Date: Thu, 15 Oct 2020 06:46:45 +0000 Subject: [Starlingx-discuss] Enabling CPU Manager In-Reply-To: References: Message-ID: Hi Austin, Thanks for your reply. I was able to configure CPU Manager and test it. Regards, Sriram From: Sun, Austin Sent: Wednesday, October 14, 2020 6:52 AM To: Dharwadkar, Sriram ; starlingx-discuss at lists.starlingx.io Subject: RE: Enabling CPU Manager Hi Sriram: You can check [1] and [2] . I think current implement is using host label (kube-cpu-mgr-policy) to enable this feature gate [1] https://storyboard.openstack.org/#!/story/2006565 [2] https:/ External (austin.sun at intel.com) Report This Email FAQ Protection by INKY Hi Sriram: You can check [1] and [2] . I think current implement is using host label (kube-cpu-mgr-policy) to enable this feature gate [1] https://storyboard.openstack.org/#!/story/2006565 [2] https://review.opendev.org/#/c/689609/ Thanks. BR Austin Sun. From: Dharwadkar, Sriram > Sent: Wednesday, October 14, 2020 2:26 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Enabling CPU Manager Hi, I have installed StarlingX edge cloud with version 4.0 with one controller + 2 worker nodes. In this setup, how to enable CPU Manager, so that dedicated cpu's can be assigned to containers. I modified the file /var/lib/kubelet/config.yaml, to have below contents and rebooted the worker node featureGates: CPUManager: true cpuManagerPolicy: static cpuManagerReconcilePeriod: 5s systemReserved: cpu: 2000m kubeReserved: cpu: 2000m It has not taken effect. Any suggestions to enable CPU Manage ? Looks like documentation for this is not ready. Regards, Sriram -------------- next part -------------- An HTML attachment was scrubbed... URL: From yadav.akshay58 at gmail.com Thu Oct 15 07:04:02 2020 From: yadav.akshay58 at gmail.com (Akki yadav) Date: Thu, 15 Oct 2020 12:34:02 +0530 Subject: [Starlingx-discuss] StarlingX R4.0 Baremetal Duplex standard controller storage ironic deployment mode: Baremetal node not creation issue. In-Reply-To: References: Message-ID: Hi ZhipengS, I used Centos 8 qcow2 which I downloaded from the official website. Link to the image is : https://cloud.centos.org/centos/8/x86_64/images/CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2 With the above centos 8 image, the baremetal node stucks in probing state. But with Centos7 qcow2, baremetal node came up smoothly. Any idea what can be the issue and how can I use centos8 without being stuck at probing state ? Sure I will raise one to track this issue after this mails reply in case we still want it to be created. Regards Akshay. On Thu, Oct 15, 2020 at 6:20 AM Liu, ZhipengS wrote: > Hi Akshay, > > > > Which image are you using to create server? > > Does this image work on R3.0? > > You can also retry it with other centos image, make sure it is not the > image issue first. > > Do we have one LP to track this issue? If not, could you help raise one > and upload key logs so that > > Some guys to further check the detail info. > > > > Thanks! > > Zhipeng > > > > *From:* Akki yadav > *Sent:* 2020年10月8日 17:25 > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] StarlingX R4.0 Baremetal Duplex standard > controller storage ironic deployment mode: Baremetal node not creation > issue. 
> > > > Hello Team, > > > > I hope all are well. > > > > *Setup:* StarlingX R4.0 Baremetal Duplex standard controller storage > ironic deployment mode. (I followed the official starlingX documentation > for deploying the setup.) > > > > *Issue*: At the time of "openstack server create" for launching > baremetal node, I came across the following multiple observations: > > > > - Sometimes when I launch baremetal node on openstack, after one time pxe > booting, the baremetal node goes down again and then comes up and goes into > second time booting and gets stuck there in "Probing" state ( Seen on > node's console) *BUT* according to openstack horizon, it is up and > running and according to "openstack baremetal node show", it is in "Active" > state. > > > > > > - And sometimes when i launch baremetal node on openstack, after one time > pxe booting, the baremetal node goes down again and then comes up, the > "spawning" state on openstack horizon goes into ERROR. > > Error seen in "nova-compute-ironic-0" container is : > > "ERROR nova.compute.manager [instance: > edd447c6-12ac-49ba-b0bc-f419aff4892a] nova.exception.InstanceDeployFailure: > Failed to provision instance edd447c6-12ac-49ba-b0bc-f419aff4892a: Timeout > reached while waiting for callback for node > 75210cc4-ad98-442d-ace1-89ce69467580" > > > > > > - The baremetal node always takes near about 2 hours to be in "available" > state from "cleaning" and "clean-wait". Is it correct behaviour ? > > > > Please guide me how to resolve this. > > > > > > Regards > > Akshay > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sanjay.k.mukherjee at intel.com Thu Oct 15 04:48:59 2020 From: sanjay.k.mukherjee at intel.com (Mukherjee, Sanjay K) Date: Thu, 15 Oct 2020 04:48:59 +0000 Subject: [Starlingx-discuss] FM containerization Build issues Message-ID: HI Frank/Tyler/Scott As discussed in the containerization meeting sharing the build issues details need your inputs :- * Issue 1:- FM with Mario's changes throws horizon not ready error on 4.0 release onwards . Attaching the Armada log for reference . * Please let us know where else to look for to get more details of the issue * Need your inputs to move forward * Issue 2:- Docker build on the latest master branch is hanging, let us know if this is a known issue . Let us know if any specific log is needed * Issue 3:- Presence of only python 3 packages in the wheels tar. It causes issue while building in Python 2.7 environment http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/stx-centos-stable-wheels.tar We built using python 2.7 taking the wheel packages built locally. (docker_build_script_2.txt) I have attached the logs and the docker scripts . Please let us know if any further details are needed. Thanks and Regards, Sanjay Mukherjee -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: stx-openstack-apply_2020-10-10-09-37-15.log Type: application/octet-stream Size: 184926 bytes Desc: stx-openstack-apply_2020-10-10-09-37-15.log URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: docker_build_script_1.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: docker_build_script_2.txt URL: From alexandru.dimofte at intel.com Thu Oct 15 07:26:57 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 15 Oct 2020 07:26:57 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20201014T144554Z Message-ID: Sanity Test from 2020-October-14 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201014T144554Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201014T144554Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D10733.2D2570D0] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 20512 bytes Desc: image002.png URL: From ebiibe82 at gmail.com Thu Oct 15 07:32:26 2020 From: ebiibe82 at gmail.com (Amit Mahajan) Date: Thu, 15 Oct 2020 13:02:26 +0530 Subject: [Starlingx-discuss] Regarding stx-monitor In-Reply-To: <6787C1DA-73E2-4AF6-B569-8557D8AD0E54@windriver.com> References: <6787C1DA-73E2-4AF6-B569-8557D8AD0E54@windriver.com> Message-ID: Thanks for your response, Matt. We will see if we are able to deploy stx-monitor or not. Otherwise, we will explore other options. On Wed, Oct 14, 2020 at 6:47 PM Peters, Matt wrote: > Hi Amit, > > The stx-monitor Armada application is not being actively maintained since > there wasn’t much interest from the community in continuing to support it. > > The individual container services can still be deployed using Helm on > StarlingX if you require. > > There are also several other projects within the CNCF landscape for > monitoring that can also be considered. 
> > > https://landscape.cncf.io/category=observability-and-analysis&format=card-mode&grouping=category > > > > *From: *Amit Mahajan > *Date: *Monday, October 12, 2020 at 8:41 AM > *To: *"starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *[Starlingx-discuss] Regarding stx-monitor > > > > Hi All, > > > > We are using StarlingX R4.0. For analyzing a few issues, we are exploring > the stx-monitor application. Could you please let us know the following: > > - Is there any documentation that can guide how to install the > stx-monitor application? > - Will we be able to get logs for the OpenStack pods that died and > were removed? > - Does stx-monitor monitors and records metrics such as CPU and Memory > usage, and are these available for post mortem analysis? > - Does stx-monitor also periodically monitor hosts' (controllers & > worker nodes') CPU, Memory etc. and are these metrics available for post > mortem analysis? > > Regards, > > Amit > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Thu Oct 15 08:00:23 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Thu, 15 Oct 2020 08:00:23 +0000 Subject: [Starlingx-discuss] =?windows-1252?q?please_attend_today=91s_meet?= =?windows-1252?q?ing_for_rook_app=27s_patch_review?= Message-ID: Rook patch review for project rook-ceph https://review.opendev.org/#/c/716792/ meeting link https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Thu Oct 15 11:46:46 2020 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 15 Oct 2020 11:46:46 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20201014T235129Z Message-ID: Sanity Test from 2020-October-14 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201014T235129Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20201014T235129Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] 
Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Regards, STX Validation Team [cid:image003.png at 01D10733.2D2570D0] Dimofte Alexandru Software Engineer Transportation Solutions Division Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 10911 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 20512 bytes Desc: image002.png URL: From Ghada.Khalil at windriver.com Thu Oct 15 13:28:58 2020 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 15 Oct 2020 13:28:58 +0000 Subject: [Starlingx-discuss] StarlingX Networking Sub-Project Meeting (bi-weekly) Message-ID: Re-sending with new zoom link Bi-weekly on Thursday 0615 PT / 0915 ET Zoom Link: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-networking Networking team wiki: https://wiki.openstack.org/wiki/StarlingX/Networking -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2805 bytes Desc: not available URL: From haochuan.z.chen at intel.com Thu Oct 15 14:32:14 2020 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Thu, 15 Oct 2020 14:32:14 +0000 Subject: [Starlingx-discuss] =?windows-1252?q?Please_attend_today=91s_meet?= =?windows-1252?q?ing_for_rook_app=27s_patch_review?= Message-ID: Thanks for attend the meeting for rook-ceph application introduction. This is the deployment doc for rook deployment. https://review.opendev.org/#/c/751158/ Please review my patch for rook. https://review.opendev.org/#/q/topic:%22ceph+containerization%22+(status:open+OR+status:merged) Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 From: Chen, Haochuan Z Sent: Thursday, October 15, 2020 4:00 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: please attend today‘s meeting for rook app's patch review Rook patch review for project rook-ceph https://review.opendev.org/#/c/716792/ meeting link https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Thanks! Martin, Chen IOTG, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Thu Oct 15 15:20:24 2020 From: scott.little at windriver.com (Scott Little) Date: Thu, 15 Oct 2020 11:20:24 -0400 Subject: [Starlingx-discuss] FM containerization Build issues In-Reply-To: References: Message-ID: <095d04c7-cfbc-013e-fdd7-b9364efc439e@windriver.com> Re Issue 2 Have you failed a launchpad? Please include ALL steps required to reproduce your build issue including setting environment variables, checking out source code (including how to cherry pick any of your changes), setting up docker, etc... and point out which command hung. Include the content of your localrc and buildrc Not knowing where you hung, it's hard to comment on which specific log would be most relevant. Based on your attachment, perhaps it was build-stx-images.sh? In that case it would likely be one of the logs matching the pattern... ${MY_WORKSPACE}/std/build-images/docker-${LABEL}-${OS}-${BUILD_STREAM}.log i.e. there is a separate log for each image. 
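For example (just a sketch; substitute the label of the image you think hung, and note that 'centos-stable' here is only an assumption about your OS and build stream):

   cd ${MY_WORKSPACE}/std/build-images
   ls -lt docker-*-centos-stable.log              # one log per image, newest first
   tail -n 100 docker-<image-label>-centos-stable.log
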
You can pass the '--only